polynomial system solving

CNRS (UMR 7161, LIX)
Laboratoire d'informatique de l'École polytechnique
Campus de l'École polytechnique
1, rue Honoré d'Estienne d'Orves
Bâtiment Alan Turing, CS35003
91120 Palaiseau, France

CNRS (UMR 7161, LIX)
Laboratoire d'informatique de l'École polytechnique
Campus de l'École polytechnique
1, rue Honoré d'Estienne d'Orves
Bâtiment Alan Turing, CS35003
91120 Palaiseau, France

Abstract. We present a probabilistic Las Vegas algorithm for solving sufficiently generic square polynomial systems over finite fields. We achieve a nearly quadratic running time in the number of solutions, for densely represented input polynomials. We also prove a nearly linear bit complexity bound for polynomial systems with rational coefficients. Our results are obtained using the combination of the Kronecker solver and a new improved algorithm for fast multivariate modular composition.

Let K be an effective field. This means that we have a data structure for representing the elements of K, together with algorithms for performing the field operations. Consider a homogeneous system of polynomial equations

  f_1 = ... = f_n = 0,

where f_1, ..., f_n ∈ K[x_0, x_1, ..., x_n]. We are interested in the exact resolution of such a system through the computation of a univariate parametrization of its zero-set by a so-called primitive element; a precise definition is given in the section on algebraic data structures below. Throughout this paper, we assume that f_1, ..., f_n are given in standard dense representation. The total degree of each f_i is written d_i, and we let D := d_1 * ... * d_n. Our algorithm requires the following regularity conditions:

  R1. f_1, ..., f_n is a regular sequence;
  R2. the ideals (f_1, ..., f_i) are absolutely radical, which means radical over the algebraic closure K̄ of K, for i = 1, ..., n;
  R3. d_i >= 2, for i = 1, ..., n.

Conditions R1 and R2 formalize the idea of a "generic system" in the sense that they are likely to hold when the f_i admit random coefficients.
Under these conditions it is well known that the system f_1 = ... = f_n = 0 admits exactly D geometrically regular solutions in the projective space P^n over K̄. Condition R3 is included for the sake of simplicity. It is not restrictive, because linear equations can be removed by means of Gaussian elimination and a suitable linear change of the variables, as discussed in a remark below. We prove new probabilistic complexity bounds for computing all the solutions in terms of a univariate parametrization by a primitive element. For finite fields K = F_q, our bound is arbitrarily close to quadratic in the number of solutions whenever d_1 = ... = d_n: see the corollary at the end of the section on the solver. For rational coefficients of bounded height, we deduce a complexity bound that is nearly linear in the expected bit size of the output of the algorithm: see the theorem of the final section. These results improve upon the best previously known complexity bounds.

We set d̄ := max(d_1, ..., d_n). In order to simplify the presentation of complexity bounds, we use the soft-Oh notation: f(n) ∈ Õ(g(n)) means that f(n) = g(n) (log(g(n) + 3))^{O(1)}; see [25, section 25.7] for technical details. The least integer larger or equal to x is written ⌈x⌉; the largest one smaller or equal to x is written ⌊x⌋. The K-vector space of polynomials of degree < d is written K[x]_{<d}. The remainder of the division of f by g is written f rem g. In order to express that an expression is explicitly computed modulo g, we write mod g. The Galois ring of characteristic p^κ and cardinality p^{κ k}, written GR(p^κ, k), is defined as

  GR(p^κ, k) := (Z/p^κ Z)[z] / (μ(z)),

with μ ∈ (Z/p^κ Z)[z] monic of degree k and irreducible modulo p.

The complexity of polynomial system solving is a central question in effective algebraic geometry, which is still open. Except in some very special cases, no softly linear time algorithms are known. In order to simplify the discussion, we assume that d_i >= 2 for i = 1, ..., n.
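To make the Galois ring GR(p^κ, k) concrete, here is a minimal Python sketch of multiplication in (Z/p^κ Z)[z]/(μ(z)), representing an element by its list of k coefficients (low degree first). The function name and encoding are ours, and this is schoolbook arithmetic only; the algorithms of this paper rely on softly linear routines instead.

```python
# Minimal sketch of multiplication in GR(p^kappa, k) = (Z/mZ)[z]/(mu(z)),
# m = p^kappa, with mu monic of degree k and irreducible modulo p.
# Illustrative schoolbook code, not an optimized implementation.

def gr_mul(a, b, mu, m):
    """Multiply a, b in (Z/mZ)[z]/(mu); elements are coefficient lists of length k."""
    k = len(mu) - 1                    # mu = mu[0] + mu[1]*z + ... + z^k, monic
    prod = [0] * (2 * k - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % m
    for i in range(len(prod) - 1, k - 1, -1):   # reduce modulo the monic mu
        c = prod[i]
        if c:
            for j in range(k + 1):
                prod[i - k + j] = (prod[i - k + j] - c * mu[j]) % m
    return prod[:k]

# GR(2^2, 2) = (Z/4Z)[z]/(z^2 + z + 1); there z * z = -z - 1 = 3z + 3 (mod 4)
mu = [1, 1, 1]                         # 1 + z + z^2
print(gr_mul([0, 1], [0, 1], mu, 4))   # [3, 3]
```

The same routine works for any characteristic p^κ: only the modulus m and the defining polynomial μ change.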
One known favorable special case concerns systems of bounded degrees > over =\>> that satisfy and : in suitable bit complexity models (computation trees, or random access memory machines), the variant of the Kronecker solver designed by Giusti, Lecerf and Salvy admits a softly linear running time for \>. This solver is probabilistic of Las Vegas type, with a well understood probability of failure that can easily be made arbitrarily small in practice; for details, see the recent work by Giménez and Matera. The Kronecker algorithm has a long history, lying deep in Kronecker's work, but it emerged after a series of seminal articles by Giusti, Heintz, Matera, Morais, Pardo, and their collaborators in the nineties; for the history of this algorithm along with references, we refer to the introduction of. Assume that the input polynomials are given as a straight-line program of size over> without division (see the definition in for instance) and that we use the computation tree model for our complexity measures. If the cardinality of> is sufficiently large and the conditions and hold, then the Kronecker solver computes a parametrization of the solution set in terms of a primitive element using <\equation> L*|)> operations in > and with a uniformly bounded probability of failure. The complexity bound essentially comes from1> by taking into account that \2*D>>; this is detailed in section below. If the > are all bounded, then we notice that |)>>>>, so the bound is essentially quadratic. In the other extreme case when is bounded, we rather have D>, so the bound becomes essentially cubic. In this paper, we show how to obtain a nearly quadratic bound for this case as well. Gröbner bases constitute another cornerstone for polynomial system solving. Algorithms in this setting are usually deterministic, but their complexities are highly intricate in general: they depend on both the system and the admissible ordering attached to monomials. 
Complexities of unfavourable cases are doubly exponential in ; see a summary of known families in21>. The first simply exponential bounds of the form >> for zero dimensional systems are due to Giusti, Lazard and Lakshman in the eighties. The constant hidden behind the latter \P\Q may be related to a feasible exponent > for linear algebra, in the sense that any two square matrices of size n> may be multiplied with >|)>> operations in their coefficient ring. Basically one may perform Gaussian elimination on Macaulay matrices: for instance, under and for the graduated reverse lexicographic ordering, the resolution requires >|)>> operations in >, where -1|)>> is the Macaulay bound; see, or26> for a proof from scratch, for instance. At present time it is not known how to exploit the structure of Macaulay matrices to perform Gaussian elimination in time quadratic in > or even in >>. One contribution to this problem, due to Canny, Kaltofen, and Yagati, relies on sparse polynomial interpolation and the Wiedemann solver. Further important results based on structured linear algebra have been established by Mourrain, Pan, and Ruatta: they proved that the number of roots and of real roots can be computed with *D*log D|)>> floating point operations, and that all the > (real) roots in a given box up to the error> can be computed with +\|)>*n*3*\*D*log D*log b|)>> floating point operations, where > and> depend linearly on as well as on the conditioning of the system. Yet another important advance here is Faugère's F5 algorithm, which suppresses useless rows in Macaulay matrices. However, its worst case complexity >|)>|)>> remains essentially cubic in>. Other methods derived or inspired by Macaulay's work have been developed by Mourrain and Trébuchet. For Gröbner bases of zero dimensional systems, monomial orderings may be changed quite conveniently by means of linear algebra. 
This is known as the FGLM algorithm (named after the initials of its authors, Faugère, Gianni, Lazard and Mora) and involves a cost |)>>. Recently, Faugère, Gaudry, Huot and Renault decreased it to >|)>>. Nevertheless one must keep in mind that the best practical values for > are only about , which is far from the best known theoretical bound 2.3728639>, due to Le Gall. Another aspect of the present paper concerns multivariate modular composition, that is the computation of ,\,g|)> rem h> where \,\,x|]>>, \> has degree, and ,\,g> are in d>>. When , the best known complexity bound |)>> for any field> is due to Brent and Kung. Their algorithm even yields asub-quadratic cost>|)>> when using fast linear algebra; see185>. Here the constant \1.5> is such that a\> matrix over > may be multiplied with another\n>> rectangular matrix with >|)>> operations in >. Huang and Pan proved that \1.667>; see10.1>. In3> we extended Brent and Kung's result to the multivariate setting. A major breakthrough for the modular composition problem is due to Kedlaya and Umans, in the case when> is afinite field>> and more generally a finite ring of the form /r*\|)>/|)>> for any integer and > monic of degree . For any fixed rational value \0>, they showed that ,\,g|)>>> can be computed modulo in time *k*log p|)>>|)>> whenever the partial degrees of are d>, and under the condition >>; see7.1>. Extensions to compositions modulo triangular sets can be found in. The first technical but important contribution of this paper concerns the refinement of Kedlaya and Umans' complexity result on modular composition. In fact, in Corollary we achieve a sharper complexity bound in terms of the total degree of , that is softly linear in the bit size > of the coefficients, and that holds without the assumption >>. 
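Brent and Kung's baby-step/giant-step idea for univariate modular composition can be sketched as follows over a prime field. This naive version (all names ours) performs the inner linear combinations directly; the fast variant replaces them by one rectangular matrix product, which is what yields the sub-quadratic exponent quoted above.

```python
# Baby-step/giant-step modular composition f(g) rem h over F_p, in the spirit
# of Brent and Kung: write f as chunks f_j of length m, compute f(g) as
# sum_j f_j(g) * (g^m)^j rem h. Illustrative sketch, not the fast version.
import math

def _mulmod(a, b, h, p):
    """(a * b) rem h in (Z/pZ)[x]; h monic, coefficients listed low degree first."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    k = len(h) - 1
    for i in range(len(prod) - 1, k - 1, -1):   # divide out the monic h
        c = prod[i]
        if c:
            for j in range(k + 1):
                prod[i - k + j] = (prod[i - k + j] - c * h[j]) % p
    prod = prod[:k]
    return prod + [0] * (k - len(prod))

def compose_mod(f, g, h, p):
    k = len(h) - 1
    m = math.isqrt(len(f) - 1) + 1              # chunk length ~ sqrt(deg f)
    baby = [[1] + [0] * (k - 1)]                # g^0, g^1, ..., g^(m-1) rem h
    for _ in range(m - 1):
        baby.append(_mulmod(baby[-1], g, h, p))
    giant = _mulmod(baby[-1], g, h, p)          # g^m rem h
    res = [0] * k
    G = [1] + [0] * (k - 1)                     # current power (g^m)^j rem h
    for j in range(0, len(f), m):
        chunk = [0] * k                         # baby steps: f_j(g) rem h
        for i, c in enumerate(f[j:j + m]):
            chunk = [(x + c * y) % p for x, y in zip(chunk, baby[i])]
        t = _mulmod(chunk, G, h, p)             # giant step
        res = [(x + y) % p for x, y in zip(res, t)]
        G = _mulmod(G, giant, h, p)
    return res

# f = x^3 + 2x + 1, g = x + 1, h = x^2 + 1 over F_7: f(g) rem h = 4x + 1
print(compose_mod([1, 2, 0, 1], [1, 1], [1, 0, 1], 7))   # [1, 4]
```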
We also handle fast evaluations of in a larger algebra of the form >/,y,Q|)>>, where 1> and is monic in of degree >: this is a particular case of composition modulo triangular sets studied in, for which we also improve upon the dependency in the coefficient size. These new results are based on our recent advances on multivariate multi-point evaluation algorithms. At amore abstract level, our algorithm applies to any field > over which fast multi-point multivariate polynomial evaluation exists. We recall that fast multivariate multi-point evaluation remains an important open problem: no efficient algorithms are known over ageneral abstract field > and we are not aware of any efficient implementations of Kedlaya and Umans' method; for some recent progress on special cases, we refer to. Our main contributions concern the complexity of polynomial system solving. We first prove a Las Vegas type probabilistic bit complexity bound *>|)>*>*k*log p|)>> over the finite field =\>>, for the dense representation of input polynomials, and where \0> is any fixed rational value: see Corollary. Whenever =\=d=>>, this bound is arbitrarily close to quadratic in the number of the roots =>> of the system. This improves upon previously known bounds. For example, the complexity bound simplifies to |)>>> whenever D>>>. With input polynomials given in dense representation, the quantity is of the order +n|n>>. If all the > remain bounded and if tends to infinity, then we have <\equation*> +n|n>\>|d!>. Consequently, the expected complexity of the Kronecker solver becomes |)>>, in terms of the number of arithmetic operations in >. However, if the > are no longer bounded, then +n|n>> may grow with |)>> and the worst case complexity of the solver becomes*>|)>>. Our next main result concerns the case =\>. We assume that ,\,f> have integer coefficients of bit size >>, and we achieve expected time *>|)>*n>*>|)>>. 
Whenever =\=d=>>, this time turns out to be nearly linear in the bit size of the output: see the precise bound in Theorem. This complexity bound admits major consequences for symbolic real solving, especially for algorithms based on the Kronecker solver; see for instance and references therein. Once a Kronecker parametrization is known, it is also possible to appeal to univariate numerical solvers to deduce complex or real approximations of the roots of the system; see for instance the book. Let us briefly describe the structure of this paper. Section is devoted to prerequisites about the complexity model and fast multipoint polynomial evaluation. The next section deals with various particular cases of multivariate modular composition involved in the Kronecker solver. Section concerns the algebraic data structures needed by the solver, along with general position and some of the randomness aspects. The solver is itself recalled in section, that ends with our main result over the finite fields. Our main complexity bound over the rational numbers is stated in the final section. It may be helpful to notice that the probability analysis of our algorithms contains two main stages. On the one hand, we need to put the system in general position and prepare the main data structures. This part is handled in section. On the other hand, for the sake of efficiency, some subalgorithms of the Kronecker solver involve additional random parameters that must be taken outside certain proper closed subvarieties (see Propositions and ), or integer parameters that must be coprime with a certain set of bad primes (see Lemma). The probability of picking suitable random parameters of this kind will be analyzed separately. Throughout this paper, we will be working in the Turing machine model with sufficiently many tapes (seven tapes are usually sufficient in order to implement basic routines on polynomials, series, and matrices in a natural way). 
The Kronecker algorithm is randomized, so it also requires a special instruction that generates a random symbol in one cell within constant time. In some cases, algebraic structures have a natural bit size (e.g. modular integers, finite fields); in other cases the size is variable (e.g. arrays, polynomials). In both cases elements are represented on tapes as sequences of symbols, followed by a specific termination symbol "#". The reader must keep in mind that the heads of the machine can only move one cell left or right at each time step. Algorithms must take care to use data in as contiguous a way as possible, and loop counters cannot be used for free. The rest of this section gathers standard data types and elementary operations needed in the next sections. We freely use well known complexity bounds on polynomials and matrices; details about Turing machine implementations of these algorithms can be found in the references.

We use binary representation for integers. A modular integer in Z/rZ is represented by its natural preimage in {0, ..., r-1}. Integers can be added in linear time and multiplied in softly linear time; integer divisions and extended gcds in bit size at most n also take softly linear time. We will also use truncated p-adic integers, for which we refer the interested reader to the references for practical algorithms.

One dimensional arrays are sequences of elements terminated by "#". For bidimensional arrays we use column-major representation. This means that an array (A_{i,j})_{1<=i<=r, 1<=j<=c} of size r x c (i.e. r rows and c columns) is stored as the vector of its columns, ((A_{1,1}, ..., A_{r,1}), (A_{1,2}, ..., A_{r,2}), ..., (A_{1,c}, ..., A_{r,c})). Such arrays are allowed to contain elements of different types and sizes. In the Turing machine model, we recall that the transposition of a bidimensional array can be achieved in softly linear time:

Lemma 2.3. Let A = (A_{i,j}) be an r x c matrix.
Let > denote the size of > for all , and define b>. Then can be transposed in time *log min|)>>. For univariate polynomials we use dense representation, which means that a polynomial of degree is stored as the vector of its coefficients from degrees to . Additions and subtractions take linear time in . Multiplying two polynomials over finite fields may be achieved in softly linear time. In the present paper we consider finite rings of the form \/r*\|)>/|)>> with 2> and \/r*\|)>> monic of degree . Elements of > are stored as their natural preimage in k>> with coefficients in ,r-1|}>>. Additions and subtractions in > take linear time, products take time >; see for instanceII>. <\lemma> 2.12>> Let \/r*\|)>/|)>> be as above and let ,\,A> be polynomials in d>>. For any sequence of points ,\,\> in > we may compute |)>,\,A|)>,\,A|)>,\,A|)>> in time *k*log r|)>>. <\lemma> 2.14>> Let \GR>,k|)>> be a Galois ring, let ,\,\> be elements in > such that -\> is invertible modulo for all j>, and let ,\,b> be vectors in >. We may compute the unique polynomials ,\,A> in N>> such that |)>=b> for ,N> and ,m> in time *k*log p|)>>. For a polynomial \,\,x|]>> we use the , by regarding as an element of |]>|]>\|]>>. In particular, admits the same representation as its expansion +f*x+\+f-1>*x-1>\\,\,x|]>|]>> as a univariate polynomial in>. In our algorithms, the number of variables is not part of the representation of, so it must be supplied as a separate parameter. The of \,\,x|]>> is defined as the set of monomials with nonzero coefficients and we write > for its cardinality. Assuming that, apart from the mandatory trailing \P#\Q symbol, the representations of coefficients in > do not involve other \P\Q symbols (this can always be achieved through suitable renamings \>>>), we denote the number of \P\Q symbols involved in the representation of by >. We notice that \>. For the remainder of this subsection, we assume that the size of elements in > is bounded by a constant >>. 
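As a concrete illustration of the tape representation described above, the following sketch serializes an r x c array in column-major order with "#" terminators. The helper name, and the choice of decimal digits for the entries, are ours; the exact placement of the terminators is our reading of the conventions above.

```python
# Column-major serialization of an r x c array with "#" terminators:
# each cell is followed by "#", each column (a one dimensional array) by a
# further "#", and the array of columns by a final "#". Illustrative sketch.

def serialize(matrix):
    """Store (A[i][j]) column by column, as on a Turing machine tape."""
    r, c = len(matrix), len(matrix[0])
    out = []
    for j in range(c):
        for i in range(r):
            out.append(str(matrix[i][j]) + "#")   # one cell
        out.append("#")                           # end of one column
    return "".join(out) + "#"                     # end of the array of columns

A = [[1, 0, 1],
     [2, 1, 0]]
print(serialize(A))   # 1#2##0#1##1#0###
```

Transposing amounts to regrouping the flat string by rows instead of columns, which is what the lemma performs in softly linear time on the machine.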
<\example> The univariate polynomial =c+c*x+\+c*x> of degree is represented by #c#\c##>. The bivariate polynomial ,x|)>=c+c*x++c*x|)>*x> is represented by #c##c#c###>. <\lemma> 2.5>> Let \,\,x|]>> be of partial degree \> in > for ,n>. Then \\*\*\+1\n*\+1>, where \\*\*\\>. <\lemma> 2.6>> Let \,\,x|]>> be a non-zero polynomial of total degree d>. Then \\n*\+1>, where \\>. <\lemma> 2.7>> The partial degree bounds =1+deg> f,\,\=1+deg>f> of anon-zero polynomial \,\,x|]>> can be computed in time *|)>+*>|)>>, where \\*\*\>. <\lemma> 2.8>> The total degree of a polynomial \,\,x|]>> can be computed in time *log+*>|)>>, with the convention that -1>. <\lemma> Let \/r*\|)>/|)>> be as above. Any partial derivative of \,\,x|]>> of total degree d> can be computed in time *>, where \>. <\proof> In order to compute the partial derivative in >, we run over all the coefficients of in time *+O|)>=n*\*> by Lemma. If we are interested in the derivative in > with n>, then we reinterpret as an element of >,\,x|]>\,x|]>> and its derivative requires time *+O|)>=n*\*>. <\lemma> Let \/r*\|)>/|)>> be as above, let \,\,x|]>> be of total degree, and let \\>. Then ,x,\,x|)>> may be computed in time *+O|)>=n*\*>, where \>. <\proof> We use Horner's method to evaluate univariate polynomials over > at > in softly linear time. In total, the machine runs over the entire representation of and the time spent in arithmetic operations is at most *>. <\lemma> 2.13>> Let \/r*\|)>/|)>> be as above, let \M>,\,x|]>>, and let ,\,\> be elements in >. Then ,x,\,x|)>,\,f,x,\,x|)>> can be computed in time <\equation*> \** M|)>+min,log N|)>|)>*+O|)>, where > is the cardinality of the support of in the variables ,\,x>. At several places we will use the following bound on the cardinality of the support of a polynomial in variables and total degree 2>: <\equation> |d>==+|)>\. For the sake of efficiency, a in the variables ,\,x> has adedicated representation made of >,\,x|)>\f,\,x|)>> and . 
In this way, can be recovered as *f>/x,\,x/x|)>>. The problem called over an effective ring > is the following: given \,\,x|]>> and points ,\,\> in >, compute |)>,\,f|)>>. The first following statement concerns the cost of the naive evaluation algorithm in terms of the total degree of . <\lemma> 3.8>> Let \/r*\|)>/|)>> with > monic of degree be as above, let \,\,x|]>> be of total degree d>. Then we may evaluate at a point ,\,a|)>>\\>> in time *>. In terms of the partial degrees of , we will need the following better complexity bounds that refine previous work by Kedlaya and Umans. The first theorem takes partial degrees into account, while the second one is in terms of the total degree. <\theorem> 4.5>> Let \0> be a fixed rational value. Let \/r*\|)>/|)>> with > monic of degree, let \,\,x|]>> be of partial degree\> in > for ,n>, and let ,\,\> be a sequence of points in >. Then we may compute |)>,\,f|)>> in time <\equation*> |)>**log|)>|)>|)>**\*k*log r|)>. <\theorem> 4.6>> Let \0> be a fixed rational value. Let \/r*\|)>/|)>> with > monic of degree, let \,\,x|]>> be of total degreed>, and let ,\,\> be a sequence of points in >. Then we may compute |)>,\,f|)>> in time <\equation*> |)>**log|)>|)>**d**k*log r|)>. The Kronecker solver needs to perform a linear change of variables of matrix into the input homogeneous polynomials. Let be such a polynomial in >,k|)>,\,x|]>>> of total degree, and let |)>i\n,0\j\n>> be a \> matrix over >,k|)>>,\,x|]>>>. We need to compute <\equation*> N|)>,\,x|)>\f*x+\+N*x,\,N*x+\+N*x|)>. A natural idea is to interpolate N|)>,\,x|)>> from its values at >, where is asubset of >> of cardinality . Such an interpolation is known to be computable in time *\*k*log p|)>>; seeLemma of the appendix. On the other hand the values of at \S|)>> may be obtained in time |)>*>*\*k*log p|)>> by5.7>, for any fixed rational value \0>. 
However, this approach suffers from two drawbacks: it takes neither the homogeneity of nor the structure of \S|)>> into account. At the end of Appendix, we show the following sharper complexity result: <\proposition> Let GR>,k|)>,\,x|]>> be homogeneous of degree 2>, and let be an\> matrix over >,k|)>>. If \d> and if a LU-decomposition of is given, then we may compute N> in time *\*k*log p|)>>. This section is devoted to multivariate modular composition, that is the evaluation of amultivariate polynomial in ,\,x|]>> at a point in >, where \\/|)>> with> monic and 1>. As recalled in the introduction, no fast algorithm is known for this task over a general ring or field >. For the Kronecker solver presented in this paper, the following particular instances of > are needed: >/|)>> and /p>*\>, along with tangent numbers over theses rings. In this section, we design fast algorithms for these cases, on the basis of Kedlaya and Umans' ideas, and new variants from. If =1>, then >,k|)>> is the finite field >>, whereas corresponds to the ring /p>*\> of the -adic integers truncated to precision>. Until the end of the section we set <\equation> \\GR>,k|)>/,y|)>, and <\equation*> \\\/|)>, where \> is a monic polynomial of degree 1> in . Let ,d,N|)>> represent a cost function for the multi-point evaluation of -variate polynomials over >,k|)>> with partial degrees \>, total degree d>, and for N> evaluation points. The following algorithm reduces modular composition to multi-point evaluation. <\specified-algorithm> <\description-compact> \,\,x|]>> of partial degrees \> and of total degree d>; ,\,A> in>. ,\,A|)>>. \d*-1|)>>. <|specified-algorithm> <\enumerate> Write <\equation*> \\>\>+>*e for the canonical preimage of > in >,k|)>>, where >> and >> belong to >,k|)>> and have partial degrees M> in and D> in. In a similar manner write <\equation*> >=>,\,x|)>+>,\,x|)>*e for the preimage of in >,k|)>,\,x|]>>. Build points ,\,\-1|)>>\GR>,k|)>> whose reductions modulo are pairwise distinct. 
For ,d*> and for ,d*>, compute <\equation*> \\>,\|)>,\,>,\|)>|)>>,\|)>,\,>,\|)>|)>. For ,d*> compute ,\,x|)>\>,x,\,x|)>> and ,\,x|)>\>,x,\,x|)>>>. For ,d*> and for ,d*>, compute <\equation*> \\\|)>\\\|)>+\|\x>|)>*>,\|)>. Interpolate the polynomials > and > in >,k|)>>, of partial degrees d*>> in and d*> in , and such that ,\|)>=\> and ,\|)>=\> for all ,d*>> and ,d*>. Return +h*e> reduced modulo ,y,Q|)>>. <\proposition> Algorithm is correct and takes time <\equation*> *d*M*,d,d*D|)>+*M*D*\*k*log p|)>+*\*k*log p|)>, whenever 2>. <\proof> First we verify that <\eqnarray*> ||>>,\,>|)>=>>,\,>|)>+>>,\,>|)>*e mod e>>|||>>,\,>|)>+>>,\,>|)>+>|\x>>,\,>|)>*>|)>*e mod e.>>>> Therefore, for all ,d*> and ,d*> we obtain <\eqnarray*> ||>>,\,>|)>,\|)>>>||||)>+|)>+\|\x>|)>*>,\|)>|)>*e mod e>>|||,\|)>+h,\|)>*e mod e.>>>> Since >>,\,>|)> mod e> has partial degrees d*> in and d*> in , we deduce <\equation*> >>,\,>|)> mod e=h+h*e, whence the correctness of the algorithm. The rewriting in step1 takes linear time. The construction of the points in step2 costs *\*k*log p|)>>. In step3 we appeal to Lemma in order to obtain <\equation*> >,t|)>,\,>,t|)>,\,>>,t|)>>,\,>>,t|)> in time *k*log p|)>>, and then to obtain <\equation*> >,\|)>,\,>,\|)>>,\,>,\>|)>,\,>,\>|)> for ,d*> in time *M*D*\*k*log p|)>>. Notice that the required data reorganizations can be done in softly linear time thanks to Lemma. The cost of step4 is provided by Lemma, <\equation*> \*d*M* M+log|)>**k*log p|)>+O|)>, and simplifies to *\*k*log p|)>> by and Lemma. The derivations in step5 incur *\*k*log p|)>> by Lemma. Taking into account data reorganizations Lemma, the total cost of step5 is <\equation*> *d*M*,d,d*D|)>+*M*D*\*k*log p|)>+*\*k*log p|)>. In step6, the interpolations amount to *M*D*\*k*log p|)>> by Lemma. Finally step7 takes additional time *M*D*\*k*log p|)>>. 
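The evaluation-interpolation mechanism behind the algorithm and proof above can be illustrated in the simplest possible setting, where the modulus splits with distinct roots over a prime field: evaluate the A_i at the roots, evaluate f at the resulting points, and interpolate the result. This toy version over F_p (all names ours) deliberately ignores the Galois-ring and tangent-number layers of the actual algorithm.

```python
# Multivariate modular composition f(A_1, ..., A_n) rem q by evaluation and
# interpolation, in the easy case q = (x - t_1)...(x - t_N) over F_p with
# distinct roots t_j. Illustrative sketch of the reduction to multi-point
# evaluation; polynomials are coefficient lists, low degree first.

def poleval(A, t, p):
    acc = 0
    for c in reversed(A):
        acc = (acc * t + c) % p
    return acc

def polymul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange(ts, vals, p):
    """The unique h with deg h < len(ts) and h(ts[j]) = vals[j], over F_p."""
    N = len(ts)
    h = [0] * N
    for j in range(N):
        num, den = [1], 1
        for m in range(N):
            if m != j:
                num = polymul(num, [(-ts[m]) % p, 1], p)
                den = den * (ts[j] - ts[m]) % p
        s = vals[j] * pow(den, -1, p) % p
        for i, c in enumerate(num):
            h[i] = (h[i] + s * c) % p
    return h

def compose_multivar(f, As, ts, p):
    """f(A_1, ..., A_n) rem q for q = prod (x - t), t in ts, over F_p.
    f is a dict {(e_1, ..., e_n): coeff}; the A_i are coefficient lists."""
    vals = []
    for t in ts:
        alphas = [poleval(A, t, p) for A in As]   # evaluate the A_i at a root
        v = 0
        for exps, c in f.items():                 # then evaluate f there
            for a, e in zip(alphas, exps):
                c = c * pow(a, e, p) % p
            v = (v + c) % p
        vals.append(v)
    return lagrange(ts, vals, p)                  # and interpolate the result

# f = x1*x2 + x1^2, A1 = x, A2 = x + 1, q = x(x-1)(x-2) over F_101:
# f(A1, A2) = x(x+1) + x^2 = 2x^2 + x, already of degree < 3
print(compose_multivar({(1, 1): 1, (2, 0): 1}, [[0, 1], [1, 1]], [0, 1, 2], 101))
```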
Our main complexity bound for modular composition is presented in the next theorem, under the condition that sufficiently many evaluation points are at our disposal. For convenience we recall the following lemma. <\lemma> 4.7>> For all positive integers we have <\equation*> log \n*log|)>+d*log|1+|>. <\theorem> Let \0> be a fixed rational value, assume that 2> and that <\equation> p\max|)>*>+1|)>*n*d/8>|)>*|)>-1|)>. Let \,\,x|]>> be of total degree d> and let ,\,A> be in \\/|)>>, with monic in of degree d>. Then ,\,A|)>> can be computed in time <\equation*> |)>*n>*\*k*log p|)>. <\proof> Let \0> be such that <\equation*> log|)>+\*log|)>\log|)>. First we examine the case \*n>. Lemma and the fact that the function <\equation*> \\log|)>+\*log|)> is nondecreasing yield <\equation*> log \n*|)>+*log|)>|)>\|)>+\*log|)>|)>*n, so we have \|)>>. By adapting Lemma to >, the cost of the naive modular evaluation is |)>**\*k*log p|)>=|)>*n>*\*k*log p|)>>, since 2>. From now we assume that *n\d> holds. We set |8/\|\>>. If is bounded then so is and the above adaptation of Lemma may again be used to achieve a naive evaluation cost in *\*k*log p|)>>. Consequently, we may further assume that is sufficiently large to satisfy <\equation> |)>*log|)>*d|)>\d>,d+1\d>,m\log d. We perform the multivariate Kronecker segmentation of into >> which has >\m*n> variables and partial degrees |\>> as follows. We let <\equation*> \\d+1,|\>\|\|\>i=1,\,m-1, |\>\|\/|\>*\*|\>|)>|\>, and introduce the Kronecker substitution map <\equation*> ||\>,\,|\>>:GR>,k|)>,\,x,\,x,\,x>|]>>|>|>,k|)>,\,x|]>>>|>|>||\>*\*|\>>i=1,\,n,j=1,\,m.>>>>> We set >\K|\>,\,|\>>> and >\|\>+\+|\>|)>*n->>. Observe that we have |\>=\=|\>\|\>> and \|\>*\*|\>>. From <\equation*> \-1\|\>\\i=1,\,m-1 and <\equation*> \/|\>*\*|\>|)>\|\>\\/|\>*\*|\>|)>+1 we deduce that <\equation*> |\>*\*|\>\\+\/m>=\*|)>. Still by allowing to be sufficiently large we may further assume that <\equation> |\>*\*|\>\|)>*\and>|\>\|)>*d. 
On the one hand, by5.1> and, the computation of >> takes time <\equation*> |)>**\*k*log p|)>=|)>*|)>*n>*\*k*log p|)>. On the other hand, for ,n> we compute <\equation*> >,\,>|)>\,A|\>>,\,A|\>*\*|\>>|)>, by binary powering, in time *\*k*log p|)>>. We may now compute ,\,A|)>> as <\equation*> f,\,A|)>=>>,\,>,\,>,\,>|)>. Since ensures <\equation*> >\|\>+\+|\>|)>*n\|)>*m*n*d\|)>*>+1|)>*n*d/8>, the condition allows us to combine Proposition and Theorem. This yields the complexity bound <\eqnarray*> ||>+2|)>*>*M*>,|\>,>,>*d|)>+>*>*M*d*\*k*log p|)>+>*\*k*log p|)>>>||>|>+2|)>*>*M*>,|\>,>,>*d|)>+>*M*d*\*k*log p|)>>>||>||)>*>*M*>*d+|\>*log|\>|)>|)>|)>**|\>*\*k*log p|)>>>|||+>*M*d*\*k*log p|)>>>||||)>*M*+|)>*log|)>*m*n*d|)>|)>|)>>>|||\>*\*k*log p|)>)>>>||>||)>*M*+*log d|)>|)>*>*\*k*log p|)>,>>>> for a suitable constant . If *n>\*log d|)>>, then this bound further simplifies to <\equation*> |)>*|)>*n+\>*\*k*log p|)>. Otherwise we have *n>\*log d|)>> which implies that >>. In this case, the condition allows us to combine Proposition and Theorem, in order to obtain ,\,A|)>> in time <\eqnarray*> ||*d*M*,d,d*d|)>+*M*d*\*k*log p|)>>>||>||)>*d*M*+*log|)>|)>**d**\*k*log p|)>>>|||+*\*k*log p|)>>>||||)>*|)>*log|)>*d|)>|)>**\*k*log p|)>,>>>> which simplifies to |)>*|)>*n>*\*k*log p|)>> thanks to>. Overall we have proved a complexity bound |)>*|)>*n>*\*k*log p|)>>. Taking into account that log d>, from, it follows that |)>\d*n>>, which concludes the proof. In order to generalize Theorem to small residue fields, namely whenever inequality does not hold, we extend the cardinality of the ground ring. We will rely on the following construction. <\lemma> Let >,k|)>=/p>*\|)>/|)>> be a Galois ring, and let be an integer that is coprime with . Then we may compute a monic irreducible polynomial \\> of degree in time *log p|)>> with any fixed arbitrarily low probability of failure, or in expected time *log p|)>> with a Las Vegas algorithm. 
Then we may build the Galois ring >,k*l|)>> as /p>*\|)>/|)>>, such that the following isomorphism holds: <\eqnarray*> :/p>*\|)>/,\|)>>|>|/p>*\|)>/|)>>>||>|>>||>|\u/\,>>>> for some polynomials > and > in /p>*\|)>k*l>>. The polynomials >, > and > can be computed in time +2>*\*k*log p|)>>. Each application of > and > takes time *\*k*log p|)>>. <\proof> The polynomial > can be obtained by using Ben-Or's randomized algorithm14.42>. Since and are coprime, a natural candidate for > is the of > and > (usually written \\>), <\equation> \\|)>=0>|)>=0>*\|)>=|)>=0>\*\*z|)>. It is well known that > is irreducible of degree . Let |~>\\*u|)> mod \>, that can be computed in time *k*log p|)>>. We construct > as the monic part of the resultant ,|~>|)>> in and we write *y-B> for the corresponding subresultant of degree in . For subresultants over Galois rings, we need to compute the necessary minors of the Sylvester matrix by means of Berkowitz' algorithm. In this manner >,, and can be obtained in time +2>*\*k*log p|)>>. It is well known that is invertible modulo >, so we compute \A*B mod \> and \u/\> in time *k*log p|)>>.\ Let us consider an element \GR>,k|)>> represented by \/p>*\|)>k>>, and <\equation*> A\Tr>,l|)>//p>*\|)>>*u|)>*||~>>|)> mod |~>|)>*||~>>|)>, where >,l|)>=/p>*\|)>/|)>>. Equivalently, we have <\equation*> A=|)>=0>*u|)>*|\*u|)>>|)> mod \*u|)>|)>*|\*u|)>>, which corresponds to the usual Chinese remaindering formula associated to the factorization of>. We observe that rem \*u|)>=a*u|)>> for all roots > of >. Consequently, for all roots *\> of > with |)>=0>, we have *\|)>=a|)>>. It follows that actually represents|)>>. It can be computed in time *\*k*log p|)>>. \ In the other direction, given a preimage > of /p>*\|)>/|)>>, we obtain > as mod ,\|)>>, in time *\*k*log p|)>>. <\remark> In the latter proof we could have appealed to faster algorithms to build irreducible polynomials, such as the ones of. 
In addition, faster algorithms are known to obtain> in specific cases; see. For simplicity we have preferred to restrict to general well known algorithms, because the complexity bound of Lemma is to be used only when is rather small. The following corollary aims at completing Theorem for small residue fields. It will not be used in the sequel because the extension of the residue field will instead be handled at the top level of the Kronecker solver. <\corollary> Let \0> be a fixed rational value, let > be as in, and assume that 2>. Let \,\,x|]>> be of total degree d> and let ,\,A> be in \\/|)>>, with monic in of degree d>. Then ,\,A|)>> can be computed using a probabilistic Las Vegas algorithm in expected time <\equation*> |)>*n>*\*k*log p|)>. <\proof> Notice that the total degree of may be obtained in time *\*k*log p|)>> by Lemmas, and inequality. It remains to handle the case when does not hold in Theorem. In other words, assume that <\equation*> k*log p=O. We first compute the smallest integer >> such that <\equation*> p>>\max|)>*>+1|)>*n*d/8>|)>*|)>-1|)>, that is <\equation*> >\||)>*>+1|)>*n*d/8>|)>*|)>-1|)>|)>|k*log p>|\>=O|)>, and we set <\equation*> l\mini\\,i*k+1\>|}>, so is coprime to and <\equation*> k*l*log p=O|)>. We next apply Lemma, from which we borrow the notation: \\> of degree is obtained in expected time *log p|)>=|)>>; the computation of>,>, and> requires time +2>*\*k*log p|)>=+2|)>>*\|)>>. With this construction in hand we set\ <\equation*> |\>\GR>,k*l|)>/,y|)>, and <\equation*> |\>\|\>/|)>, and we proceed as follows for the modular composition problem: <\itemize> We cast into |\>> in time *l*\*k*log p|)>=*\|)>>; We cast into |\>,\,x|]>> in time *l*\*k*log p|)>=*\|)>>; We cast ,\,A> into |\>> in time *l*\*k*log p|)>=*\|)>>; By Theorem, we can evaluate ,\,A|)>> in |\>> in time <\equation*> |)>*n>*\*k*log p|)>. We cast ,\,A|)>> into > in time *\|)>>. Adding up, we obtain the claimed complexity bound\Vup to replacing > by /3> from the outset. 
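The Kronecker segmentation used in the proofs above replaces one variable of high degree by several variables of low degrees. A one-variable Python sketch (names ours): a polynomial f of degree < δ1 δ2 becomes a bivariate F with deg_x F < δ1 and deg_y F < δ2, such that f(x) = F(x, x^{δ1}), and the operation is invertible.

```python
# One-variable Kronecker segmentation: exponents are rewritten in mixed radix,
# e = i + delta1 * j with 0 <= i < delta1, 0 <= j < delta2. Illustrative sketch.

def segment(f, d1, d2):
    """Split f (univariate, degree < d1*d2, low degree first) into F with
    F[i][j] = coefficient of x^i * y^j, so that f(x) = F(x, x^d1)."""
    F = [[0] * d2 for _ in range(d1)]
    for e, c in enumerate(f):
        F[e % d1][e // d1] = c
    return F

def unsegment(F, d1):
    """Inverse map: substitute y = x^d1 back into F."""
    d2 = len(F[0])
    f = [0] * (d1 * d2)
    for i, row in enumerate(F):
        for j, c in enumerate(row):
            f[i + d1 * j] = c
    return f

f = [5, 0, 3, 1, 0, 2]            # 5 + 3x^2 + x^3 + 2x^5, degree < 2*3
F = segment(f, 2, 3)
print(F)                          # [[5, 3, 0], [0, 1, 2]]
print(unsegment(F, 2) == f)       # True
```

Applying this to each variable x_i in turn is exactly how the proof trades a few variables of large partial degrees for many variables of small partial degrees before invoking fast multi-point evaluation.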
Consider a polynomial system =\=f=0> over an effective field >, which satisfies the regularity assumptions , , and . In the next section, we will recall the Kronecker algorithm that solves such a system. Before doing so, we prepare the terrain by presenting a few concepts and data structures that will be necessary, and by discussing how to put the system into a sufficiently general position by means of a random linear change of coordinates. Under our regularity assumptions, and letting \,\,f|)>>, the coordinate ring \\,\,x|]>/I> has dimension and degree \d*\*d> by the Bézout theorem. In particular the system =\=f=0> admits > distinct solutions in >, which are all regular. After applying a generic affine change of coordinates, the Kronecker solver first solves the system =0> then =f=0,>..., and so on until =\=f=0>. We first study how to perform this change of variables. Next we explain how positive-dimensional solution sets of the intermediate systems =\=f=0> are represented. A homogeneous ideal of ,\,x|]>> is said to be in Noether position when ,\,x|]>>=> and \\,\,x|]>/I> is an integral ring extension of ,\,x|]>>, where dim \>. Let be the column vector with entries ,\,x>. If is an invertible \> matrix over >, then we write <\equation*> I\M\\f\I|)>. If is generated by a proper regular sequence ,\,f>, then we say that a matrix puts the intermediate ideals \M> in simultaneous Noether position if \M> is in Noether position for all ,n>. Let us now examine the probability that this happens for an upper triangular matrix with ones on the diagonal and random entries above the diagonal. <\lemma> Let ,\,f> satisfy , , and . Given a finite subset of > of cardinality >, consider the matrix <\equation*> M\||||||>|>|>|>|>>|||>||>|>||||>||>>||||>|>|>||||||>>||||||>>>> where the > are taken at random in . Then the probability that puts \M,\,I\M> into simultaneous Noether position is at least |>>. <\proof> We use the incremental method described in the proof of 1.9> as follows (for historical references, we also refer to). 
The values ,\,\|)>> are taken such that the coefficient of >> in +\*x,\,x+\*x,x|)>> is non-zero. For such values the ideal \M> is in Noether position. Then we consider the determinant > of the multiplication endomorphism by > in ,\,x|)>|]>/\M|)>>: it belongs to ,\,x|]>\\M|)>> and has degree >. The values ,\,\|)>> such that the coefficient of >> in +\*x,\,x+\*x,x|)>> is non-zero put \M> into Noether position. By induction we consider the determinant > of the multiplication endomorphism by > in ,\,x|)>,\,x|]>/\M|)>>: it belongs to ,\,x|]>\\M|)>> and has degree >. The values ,\,\|)>> such that the coefficient of >> in +\*x,\,x+\*x,x|)>> is non-zero put \M> into Noether position. By taking into account the fact that \2*D>, the Schwartz\UZippel lemma leads to aprobability of success at least <\equation*> |>|)>*|>|)>*\*|>|)>\1->*+\+D|)>\1-|>. Let be an absolutely radical ideal of ,\,x|]>> such that the coordinate ring \\,\,x|]>/I> is zero dimensional\Vhere we consider the affine case when is not necessarily homogeneous. A polynomial \,\,x|]>> is said to be a for> if the projections of its powers generate> as a >-vector space. Let represent the degree of > ( its dimension as a >-vector space). <\lemma> Under the above assumptions, given a finite subset of >, the probability that arandom vector ,\,\|)>\S> induces a primitive element *x+\+\*x> of > is1-2*D/>>. <\proof> The minimal polynomial > of in > actually belongs to ,\,\|]>> when the> are regarded as parameters, and the total degree in the > is D>; see for instance3.4>. The points ,\,\|)>> to be avoided are the zeros of the discriminant of, seen as a polynomial of total degree D*> in ,\,\>. We conclude by the aforementioned Schwartz\UZippel lemma. For a primitive element there exist unique polynomials ,\,v\\> such that , is monic and separable, \D>, and <\equation*> I=,x-v,\,x-v|)>. The polynomials ,\,v> are called the of by . Equivalently, we may define =qv rem q> for ,n>, whence <\equation*> I=,q*x-w,\,q*x-w|)>. 
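The relation between the two parametrizations just defined, namely w_i = q' v_i rem q and conversely v_i = w_i (q')^(-1) rem q, can be sketched concretely over a prime field. In the following Python sketch the values of p, q and v are illustrative choices of ours; the modular inversion is valid because q is separable, so gcd(q, q') = 1:

```python
# Conversion between the two parametrizations of a radical zero-dimensional
# ideal: w = q' * v rem q and, conversely, v = w * (q')^(-1) rem q.
# Univariate polynomials over F_p are dense coefficient lists [c0, c1, ...].

p = 101

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def pmul(a, b):
    r = [0] * max(1, len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return trim(r)

def psub(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0)
                  - (b[i] if i < len(b) else 0)) % p for i in range(n)])

def pdivmod(a, b):
    a, qt = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        c = a[-1] * inv % p
        s = len(a) - len(b)
        qt[s] = c
        for j in range(len(b)):
            a[s + j] = (a[s + j] - c * b[j]) % p
        trim(a)
    return trim(qt), a

def prem(a, b):
    return pdivmod(a, b)[1]

def pinv_mod(a, q):
    """Inverse of a modulo q by the extended Euclidean algorithm
    (assumes gcd(a, q) = 1, as holds for a = q' when q is separable)."""
    r0, r1, s0, s1 = q[:], prem(a, q), [], [1]
    while len(r1) > 1:
        qt, r2 = pdivmod(r0, r1)
        r0, r1, s0, s1 = r1, r2, s1, psub(s0, pmul(qt, s1))
    u = pow(r1[0], -1, p)
    return prem([c * u % p for c in s1], q)

q  = [2, 98, 1]   # q(t) = t^2 - 3t + 2 = (t - 1)(t - 2) over F_101
dq = [98, 2]      # q'(t) = 2t - 3
v  = [0, 1]       # v(t) = t
w  = prem(pmul(dq, v), q)
print(w)                                   # [97, 3], i.e. w = 3t - 4
print(prem(pmul(w, pinv_mod(dq, q)), q))   # round trip recovers v = [0, 1]
```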
These polynomials ,\,w> are called the of by ; such aparametrization is uniquely determined by . <\lemma> Let ,\,f> satisfy , , and and such that the > are simultaneously in Noether position for ,n>. Given a finite subset of >, the probability that a vector ,\,\|)>>\S> induces a primitive element *x+\+\*x> common to all \,\,x|)>>,\,x|]>>/I>> for ,n> is 1-4*D/>. <\proof> Each > has dimension > as a ,\,x|)>>-vector space2.4 and the relationship to the geometric definition of the degree of an algebraic variety in Corollary3.10>. By the previous lemma and the fact that \2*D>, the probability that is primitive for all > is at least <\equation*> |>|)>*|>|)>*\*|>|)>\1->*+\+D|)>\1-|>. Let ,\,f> satisfy , , and , be in simultaneous Noether position, and let <\equation*> \\\,\,x|)>,\,x|]>/I for ,n>. Thanks to Noether positions, the system =\=f=0> has all its roots in the affine space. More generally given a point ,\,a|)>\\> all the roots of +-a*x,\,x-a*x|)>>> are in the affine space. A point ,\,a|)>\\> is called a if +-a*x,\,x-a*x|)>> is absolutely radical. <\lemma> Under the latter conditions, given a finite subset of >, the probability that a vector ,\,a|)>\S> is a simultaneous lifting point for ,\,I> is1-4*D/>. <\proof> Without loss of generality we may assume that > is algebraically closed, so we may consider a primitive element that is common to all >, thanks to Lemma. The minimal polynomial > of in > is homogeneous in ,\,x,t|]>> of degree >, and is monic and separable in . Any point ,\,a|)>> such that the specialization of at =a,\,x=a> is separable ensures that +-a*x,\,x-a*x|)>> is absolutely radical; see for instance3.6>. The probability of picking such a point at random in > is at least /> by the Schwartz\UZippel lemma. The probability that ,\,a|)>\S> satisfies the requested property is thus at least <\equation*> |>|)>*|>|)>*\*|>|)>\1->*+\+D|)>\1-|>, again by using \2*D>. Assume that we are given a polynomial system =\=f=0> that satisfies , and. 
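As a small illustration of primitive elements, the following sketch (a toy example of ours over the rationals, not part of the algorithm proper) computes the minimal polynomial of u = x + y in Q[x, y]/(x^2 - 2, y^2 - 3) by finding a linear dependency among the powers of u; since u is primitive, this minimal polynomial has degree D = 4:

```python
from fractions import Fraction

# Toy quotient B = Q[x, y]/(x^2 - 2, y^2 - 3), with monomial basis
# (1, x, y, x*y), so D = dim B = 4.  The candidate primitive element is
# u = x + y; M is the matrix of multiplication by u in this basis.
M = [[0, 2, 3, 0],
     [1, 0, 0, 3],
     [1, 0, 0, 2],
     [0, 1, 1, 0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# The powers u^0, ..., u^4 as coordinate vectors in the basis.
powers = [[1, 0, 0, 0]]
for _ in range(4):
    powers.append(mat_vec(M, powers[-1]))

# Solve c0*u^0 + c1*u^1 + c2*u^2 + c3*u^3 + u^4 = 0 over Q by Gaussian
# elimination: the solution gives the minimal polynomial of u.
A = [[Fraction(powers[j][i]) for j in range(4)] for i in range(4)]
b = [Fraction(-powers[4][i]) for i in range(4)]
for col in range(4):
    piv = next(r for r in range(col, 4) if A[r][col] != 0)
    A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
    for r in range(4):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for cc in range(4):
                A[r][cc] -= f * A[col][cc]
c = [b[i] / A[i][i] for i in range(4)]
print([int(ci) for ci in c])  # [1, 0, -10, 0]: q(t) = t^4 - 10*t^2 + 1
```

Indeed, with u = sqrt(2) + sqrt(3) one checks directly that u^4 - 10*u^2 + 1 = 0.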
In order to make the Kronecker algorithm work, the following conditions are required for,n>: <\description-compact> >>> is in Noether position, >>the ideal >\I+-1,x,\,x|)>> of ,\,x|]>> is absolutely radical. <\lemma> Let ,\,f> satisfy , and, and let be a finite subset of >. If we take |)>i\j\n>> in /2>> at random, as well as ,\,a|)>> in >, and if we let <\equation*> N\||||||>|>|>|>|>>|||>||>|>||||>||>>||||>|>|>||||||>>||||||>>>>*|||||>|>|||||>||||||>|>||>|>||>|||||>|>|>|||||>>>>, then and are satisfied with probability 1-5*D/> after replacing ,\,f> by \N,\,f\N>>. <\proof> We first pick up a matrix at random as in Lemma in order to get simultaneous Noether positions. Then comes the choice of the lifting point, for which we use the strategy presented in the proof of Lemma. At the end of the resolution, unless the execution has failed, we expect to obtain an invertible >\> matrix over > and polynomials ,\,v> such that is separable of degree >, the > have degree D>, and <\equation*> I\N+-1|)>=,x-1,x-v,\,x-v|)>. On the other hand, being absolutely radical, its roots are all regular, is separable, and the value of the Jacobian matrix of \N,\,f\N> in ,\,x> at ,\,v> is invertible modulo >; see for instance4.2>. One natural question at the end of a probabilistic resolution process is to determine whether the computed solution set is correct or not. In the case of sufficiently generic systems, namely satisfying > and , it is sufficient to verify that > regular solutions have been found. This idea is formalized in the following proposition, which turns a probabilistic resolution algorithm into a Las Vegas one. <\proposition> Consider homogeneous polynomials ,\,f> in ,\,x|]>>, an invertible \> matrix, a linear form \,\,x|]>>, and polynomials ,\,v> in > that satisfy the following conditions: <\itemize> is separable of degree >, the > have degree D>, ,\,v|)>=t>, \N|)>,\,v|)>=0 rem q> for all ,n>, the Jacobian matrix in ,\,x> of \N,\,f\N> is invertible at ,\,v|)>> modulo>. 
Then, > and > are satisfied, the ideal N=\N,\,f\N|)>> is in Noether position, and ,\,v> form a parametrization of N+-1|)>> by. Let \0> be a fixed rational value, and assume that holds. If =\>> and if <\equation*> p\max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>, then the above conditions can be checked in time >|)>*n>*k*log p|)>>. <\proof> The assumptions ensure that the system \N=\=f\N=0> admits > distinct isolated regular solutions in the affine subspace defined by =1>. By Bézout's theorem, this system has therefore no solution at infinity, namely in the hyperplane defined by=0>. For ,n>, let ,\,f|)>> denote the variety of zeros of =\=f=0>. Let us first show that is satisfied. If it were not, then there would exist a smallest index n> for which ,\,f|)>=n-i> and ,\,f|)>=n-i>. In other words ,\,f|)>>> would have a (non-empty) equidimensional component of dimension and a possibly empty component > of dimension . Heintz' version of the Bézout theorem1> would imply that <\equation*> deg W+deg W\D*d=D, whence \D>. Therefore the number of isolated roots of =\=f=0> would be <\equation*> deg\\,\,f|)>|)>\D*d*\*d\D, which contradicts the fact that =\=f=0> admits > distinct isolated regular solutions. Let be in ,n|}>>. We consider a non-zero homogeneous polynomial ,x|)>> in N>. We write into ,x|)>=c|)>*h,x|)>> with primitive as a polynomial of |]>|]>> and |)>> a monomial in >. In particular is monic in > and vanishes at all the points of N|)>>, so it belongs toN>>. This shows that the class of > in ,\,x|]>/N|)>> is integral over |]>> when seen as an element of ,\,x|]>/N|)>>. We deduce that N> is in Noether position; for instance by using1.12>. Now let us show that R> is satisfied. Consider an irreducible component of ,\,f|)>>> for some index n> and then a point in \,\,f|)>>, which is non-empty in the projective setting. This point is a solution of =\=f=0>, and as such, the value of the Jacobian matrix of ,\,f> at has full rank by hypothesis. 
Therefore, the Jacobian matrix of ,\,f> has full rank over a Zariski dense subvariety of. It follows that the primary component associated to is necessarily prime. On the other hand, R> implies that > is unmixed by7, section17>; in other words, > has no embedded primary ideal. Consequently > is absolutely radical. Assume now that =\>>, and let us examine complexities. Testing the separability of boils down to computing its gcd with>, which takes time *k*log p|)>>. We compute \N,\,f\N> in time >*k*log p|)>> by Proposition. Then we compute the Jacobian matrix in ,\,x> of \N,\,f\N> in time >*k*log p|)>> by Lemma. The evaluations of all the \N> and of at ,\,v|)>> in >/|)>> can be done in time >|)>*n>*k*log p|)>> by means of Theorem\Vrecall that these homogeneous polynomials to be evaluated are already represented by their specialization at =1>. Finally the determinant of the latter value of is obtained without division thanks to Berkowitz' algorithm in time +1>**k*log p|)>>. Testing the invertibility of this determinant simply requires additional time *k*log p|)>>. Let > be an effective field and let ,\,f> be homogeneous polynomials in ,\,x|]>>. Throughout this section, we assume that conditions , , , , and are satisfied. The Kronecker solver is incremental in the number of equations to be solved. More precisely, at stage, we assume that a parametrization of the ideal <\equation*> >=,\,f|)>+-1,x,\,x|)>\\,\,x|]> by a primitive element *x+\+\*x> is given, so that <\equation*> >=,x-v,\,x-v|)>. We will build primitive elements for all intermediate ideals from a unique tuple ,\,\|)>>\\>. In order to obtain a similar parametrization for >>, we apply the two following steps. <\description> The parametrization of >> is lifted into a parametrization of <\equation*> \,\,f|)>+-1,x,\,x|)>\\|)>,\,x|]> of the form <\equation*> =,*x-,\,*x-|)> where > is a monic polynomial of degree > in |)>>, and the > are in |)>D>>. 
It turns out that ,,\,> actually belong to |]>> and have total degrees D>; see for instance3.4>. We will compute them by avariant of the Newton\UHensel lifting strategy over |]>|]>>. Geometrically speaking > represents the affine curve |)>\\-1>,x,\,x|)>>. The intersection of this curve with the hypersurface |)>> corresponds to >|)>>. The minimal polynomial of \*x+\+\*x> modulo >> will be computed as a resultant, and the rest of the parametrization of >> is completed by means of asuitable deformation technique. We define ,\,V> to be such that this parametrization is of the form <\equation*> >=,x-V,\,x-V|)>. The lifting step essentially relies on a multivariate variant of the Hensel lemma (or, equivalently, on a multivariate power series analogue of Newton's method). The Jacobian matrix of ,\,f> in ,\,x> is written <\equation*> J\ f|\ x>>|>| f|\ x>>>|>||>>| f|\ x>>|>| f|\ x>>>>>>. The identity matrix of size i> is written >. <\specified-algorithm> <\description-compact> ,\,f\\,\,x|]>> and the parametrization ,\,v\\> of >> by *x+\+\*x>. The Kronecker parametrization ,,\,> in |)>> of > by . ,\,f> satisfy , , , , . <|specified-algorithm> <\enumerate> Initialize 1>, \q>, \v>,\, \v>. Initialize with the inverse of J,0,v,\,v|)>> modulo >. While D+1> do the following: <\enumerate> update J,0,x,,\,|)> rem > and B-B*|)> rem > over |]>|]>/|)>>; set min+1|)>>; compute >>|>>|>>>>>\>>|>>|>>>>>-B*,0,x,,\,|)>>>|>>|,0,x,,\,|)>>>>>> rem > over |]>|]>/|)>>; compute \\*+\+\*-t>; update \-*\ rem |)>> computed over |]>|]>/|)>>; for from to , <\indent> update \-*\ rem |)>> computed over |]>|]>/|)>>. For from to , compute \* rem > over |]>|]>/+1>|)>>. Return ,,\,> truncated to order +1>> and then regarded in |]>>. <\proposition> Algorithm is correct. Let \0> be a fixed rational value. If =\>> and if <\equation*> p\max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>, then it takes time *>|)>*i>*k*log p|)>+>*k*log p|)>>. <\proof> This algorithm directly comes from4>. 
We do not repeat the correctness proof but focus on the complexity analysis in the Turing model for the dense representation of polynomials. The partial evaluations of ,\,f> at =1,x=0,\,x=0> are achieved in time >*k*log p|)>> by Lemma and\Vrecall that the homogeneous polynomials are actually represented by their specialization at =1> and their total degree. All the partial derivatives needed for the Jacobian matrix can be obtained in time >*k*log p|)>> by Lemma. In step2, we compute in time <\equation*> >|)>*i>*k*log p|)>, by Theorem. Then the inversion of is performed by Berkowitz' algorithm in time +1>**k*log p|)>=>*k*log p|)>>. In step3 we regard ,0,x,\,x|)>> and the entries of ,0,x,\,x|)>>> as polynomials in >|]>|]>/+1>|)>>: their evaluations at ,\,|)>> take time *>|)>*i>*k*log p|)>> by Theorem. The remaining computations in steps3 until5 take time*k*log p|)>>. For the intersection step we follow the method designed in6>. We first compute a deformation of the parametrization of > by the primitive element <\equation*> u>\u+e*x+\+e*x where the > are new variables satisfying *e=0> for all . Let \,\,e|)>> and let >\\|]>/|)>> be a shorthand for ,\,e|]>/,\,e|)>>. The extension ,i>> of > to >|)>,\,x|]>> is parametrized by >> as follows <\equation*> ,i>=>|~>>|)>,>>|)>*x-,n-i+1>>|)>,\,>>|)>*x-,n>>|)>|)>, where >\\>|]>> is monic in of degree >, and the ,j>\\>|]>> have degreeD> in , for ,n>. Of course, >,,n-i+1>,\,,n>> respectively coincide with ,,\,> modulo >. In addition we know that the total degrees of >,,n-i+1>,\,,n>>> in > and is D>; see3>. <\specified-algorithm> <\description-compact> The Kronecker parametrization ,,\,> of > and \>. The Kronecker parametrization >,>,,n-i+1>,\,,n>> of ,i>>. ,\,f> satisfy , , , , . <|specified-algorithm> <\enumerate> Compute >\-e*>. Compute \e*|)>/ mod mod -b|)>+1>>. For from to do the following: <\enumerate> compute \/ mod mod -b|)>+1>>; compute ,j>\+*\ mod mod -b|)>+1>|)>>; compute ,j>\>*,j> mod > mod -b|)>+1>>. 
Return >,>,,n-i+1>,\,,n>> truncated to -b|)>+1>> and regarded in >|]>>. <\proposition> Except for 2*D> values of in >, Algorithm is correct. If =\>>, then it takes time *k*log p|)>>. <\proof> This algorithm corresponds to8>. Its correctness is explained in6.4>. We adapt the complexity analysis to our context. In steps2 and3.a the inverse of > modulo > is only required modulo >. If we were performing this modular inversion over >|)>> then we would have to handle rational functions of degrees growing with >. In order to avoid this coefficient swell we use truncated power series instead of rational functions. In fact, we compute the inverse of> modulo> for > specialized to , then we use classical Hensel lifting to recover mod mod -b|)>+1>>>. This method works fine whenever the resultant of > and > does not vanish at. This resultant has degree 2*D> in >. Algorithm needs to shift > by in the input and by in the output, which can be done in time *k*log p|)>> using a divide and conquer algorithm; see for instance7>. Then mod mod -b|)>+1>> is obtained in time *k*log p|)>>. The rest of the algorithm performs polynomial products of cost *k*log p|)>>. If > is a polynomial in >|]>> of total degree D> in > and , and assuming \0>, then we write <\equation*> P\|\/+e|)>> for the polynomial in >> obtained after replacing > by /+e|)>>. This substitution can be performed efficiently as follows: <\enumerate> Compute >\P\|\y/+e|)>>> with a softly linear number of operations in >; Compute >\|y-t>>. This substitution is applied to each homogeneous component of >> seen in >>. In fact if >> is the homogeneous component of degree then we are led to compute >>, which basically reduces to computing >>, and that corresponds to a univariate polynomial shift. Such a shift is known to take softly linear time, as mentioned above. If =\>> then this substitution in takes time *k*log p|)>>. We are now ready to complete the intersection step. 
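The deformation arithmetic involved here is first order in the deformation variables (their pairwise products vanish), which can be emulated with truncated "dual number" arithmetic. The following minimal Python sketch, with a single variable e satisfying e^2 = 0 and an illustrative polynomial of our choosing, shows how evaluating at u + e*x automatically carries the derivative term e*x*q'(u):

```python
# First-order deformation arithmetic: numbers a + b*e with e^2 = 0.
# Evaluating a polynomial at u + e*x yields q(u) + e*x*q'(u), which is how
# a deformed primitive element carries first-order (derivative) data.

class Dual:
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*e)(c + d*e) = a*c + (a*d + b*c)*e, since e^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__
    def __repr__(self):
        return f"{self.a} + {self.b}*e"

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs = [c0, c1, ...]."""
    acc = Dual(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

q = [0, -2, 0, 1]             # q(t) = t^3 - 2t
r = poly_eval(q, Dual(2, 5))  # evaluate at u = 2 deformed by 5*e
print(r)                      # q(2) + 5*q'(2)*e = 4 + 50*e
```

The algorithms above work with several variables e_i and higher truncation orders, but the mechanism is the same.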
<\specified-algorithm> <\description> The Kronecker parametrization >,>,,n-i+1>,\,,n>> of ,i >> and \>. The Kronecker parametrization ,\,W> of >>. ,\,f> satisfy , , , , . <|specified-algorithm> <\enumerate> Compute >>\>\|\/+e|)>>>. Compute >>\>\|\/+e|)>>> and >>\>> mod >> mod +1>>. For from to compute: <\enumerate> >,j>\,j>\|\/+e|)>>>; >,j>\>>*>,j> mod >> mod +1>>. Let >>\>>>*f,0,+e>,>,n-i+1>,\,>,n>|)> mod >> mod +1>>. Compute >\Res>>,>>|)>/Res>>,>>|)>>=Q-e*W-\-e*W> with ,\,W> in |]>/+1>|)>>. Return \*x+\+\*x,Q,W,\,W> truncated modulo +1>> and seen as polynomials in >. <\proposition> Let be a finite subset of >. If we take ,\,\|)>> in > and then in at random, then Algorithm works correctly as specified with probability 1-6*D/>. Let \0> be a fixed rational value, and assume that >\2>. If =\>>, and if <\equation*> p\max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>, then Algorithm takes time *>|)>*i>*k*log p|)>+>*k*log p|)>>. <\proof> The algorithm is borrowed from7>. Again we only focus on the complexity and probability analyses in our context. First we need to compute >>,>>|)>> along with the corresponding Bézout relation that yields >>\>> mod >>>. We wish to apply a fast subresultant algorithm, namely either the one of11> or the one of. For this purpose, we need to ensure the following properties: <\itemize> the leading coefficient (in degree >) of >>> is non-zero modulo>, the leading coefficient, in degree \D-1>, of >>> is non-zero modulo>, the non-zero subresultant coefficients from degree to -2> are non-zero modulo>. In fact, these coefficients are the only quantities that need to be inverted during the execution of the aforementioned fast subresultant algorithms. We claim that these properties are met for sufficiently generic values of the >. In order to prove and quantify this claim, we introduce <\equation*> u>\\*x+\+\*x where the > are new independent variables, and set >\\,\,\|)>>. 
The extension ,i>> of > to >|)>,\,x|]>> is parametrized by >> as follows <\equation*> ,i>=>>|)>,>>|)>*x-,n-i+1>>|)>,\,>>|)>*x-,n>>|)>|)>, where <\itemize> >,,n-i+1>,\,,n>> actually belong to ,\,\|]>,t|]>>, >> is monic, separable, and of degree > in , ,n-i+1>,\,,n>> have degree D> in, >,,n-i+1>,\,,n>> have total degree D> in > and, >,,n-i+1>,\,,n>> have total degree D> in ,\,\>. For the proofs of these facts we refer the reader to3>. It follows that >,,n-i+1>,\,,n>> are the respective specializations of >,,n-i+1>,\,,n>> at =\+e,\,\=\+e>. We further introduce <\equation*> >>\>\|\/\>and>>>\>\|\/\>. By the specialization property, the subresultants of >>> and >>>, seen as polynomials of respective degrees >>\D-1> and >>=D> in , are the specializations of those of >>> and >>> at =\+e,\,\=\+e>. Such a subresultant coefficient of>>> and>>> is a specific minor of the Sylvester matrix of>>> and>>>, of size at most -1|)>\-1|)>>. Therefore such a subresultant coefficient is a (non-reduced) rational function with denominator *-1|)>>> and numerator in ,\,\|]>> of total degree 2*D*-1|)>> in ,\,\>. Consequently the product of all these non-zero numerators yields a non-zero polynomial ,\,\|)>>> of total degree <\eqnarray*> |>|*-1|)>*-1|)>\4*D,>>>> such that *\,\,\|)>\0> implies that the -1> non-zero subresultant coefficients of >>> and >>> are invertible; in particular the resultant is invertible and so are the leading coefficients of >>> and >>>. The fast computation of >>,>>|)>> raises the same issue, and requires the same kind of probability analysis. We thus introduce <\equation*> >>\|>|)>>*f,0,x,,n-i+1>|>>,\,,n>|>>|)>|)>|\|>\/\> which belongs to |)>,\,\|]>>, has degree D> in, D> in, and total degree D> in ,\,\>. In this way <\equation*> >>>*f,0,+e>,>,n-i+1>,\,>,n>|)> is the specialization of >>> at =\+e,\,\=\+e>. 
A subresultant coefficient of >>> and >>>, seen as polynomials of respective degrees >>\D> and >, is a (non-reduced) rational function with denominator *D>> and numerator in ,\,\|]>> of total degree 4*D*D> in ,\,\>. Consequently there exists a non-zero polynomial ,\,\|)>> of total degree <\eqnarray*> |>|D+deg>>*>>|)>+deg>>*>>|)>>>||>|D+2*D+2*D>>||>|D,>>>> such that *\,\,\|)>\0> implies that all the non-zero subresultant coefficients of >>> and >>> and the leading coefficients of >>> and >>> are invertible. As in the proof of Lemma, there exists a non-zero polynomial ,\,\|)>> of total degree 2*D> such that ,\,\|)>\0> implies that is a primitive element for>>. This ensures the correctness of the output. Now assume that *\,\,\|)>*\,\,\|)>*\,\,\|)>\0>. This happens with probability <\equation*> \1-+6*DD+2*D|>\1-|>, whenever ,\,\|)>> is taken at random in >. The subresultant coefficients of>>> and >>> have degree D*-1|)>> in, so except for |D*-1|)>\D/4|\>> values of , we can compute >>,>>|)>> modulo +1>>> in time *D*k*log p|)>>, by means of the fast subresultant algorithm. A subresultant coefficient of >>> and >>> has degree 2*D*D> in, so except for 2*D*D+D+D\2D> values of , >>,>>|)>> can be computed by means of the fast subresultant algorithm modulo +1>> in time *D*k*log p|)>>. Consequently asuitable value for is found at random in with probability of success <\equation*> \1-|>. \ All the substitutions \/+e|)>> take *k*log p|)>> time according to the above discussion. All the shifts by and in totalize time +D|)>*k*log p|)>>. The steps3.b take time *D*k*log p|)>> in total. In step4 we first specialize the variables ,\,x> to ,0> in time >*k*log p|)>>, after which the rest of the evaluation of <\equation*> f,0,+e>,>,n-i+1>,\,>,n>|)> mod >> mod +1> is decomposed into evaluations in \\>,y,t|]>/,>,>>|)>> for ,n>>. 
More precisely, for the evaluation in > we ignore the variables > for j> and we may thus appeal to Theorem in order to achieve time <\equation*> *>|)>*i>*k*log p|)>. Putting together the propositions of this and the previous sections, we obtain the following complexity bound. <\theorem> Let \0> be a fixed rational value, let =\>> and let ,\,f> be homogeneous polynomials in ,\,x|]>> of total degrees ,\,d> such that: <\itemize> , , and are satisfied; \max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>,100*D|)>>. Then an invertible matrix , a primitive element and a Kronecker parametrization of \N+-1|)>>> can be computed in time *>|)>*>*k*log p|)>>, with a probability of failure that is bounded by1/2>. <\proof> This result is a consequence of the Kronecker solver that was recalled at the beginning of this section. First we pick up an invertible matrix of size \> with entries at random in >> as described in Lemma, in order to ensure that after replacing ,\,f> by \N,\,f\N> conditions and hold with a probability at least <\equation*> 1-5*D/p\1-1/40. Computing \N,\,f\N> takes time >*k*log p|)>=*>*k*log p|)>> by Proposition. On the other hand the probability that ,\,\> are suitable for all the intersection steps is at least <\equation*> 1-|p>\1-12*D/p\1-1/8, by Proposition. The probability that a random value of is suitable for all stages of the solver is at least <\equation*> 1-+2*D|p>\1-14*D/p\1-1/7, by Propositions and.\ Overall the probability of failure is bounded by . Notice that the constant is not intended to be optimal. The total complexity follows by combining Propositions, and. <\corollary> Let \0> be a fixed rational value, let =\>> and let ,\,f> be homogeneous polynomials in ,\,x|]>> of total degrees ,\,d> such that , , and are satisfied. 
Then the Kronecker algorithm computes an integer >|)>>, the finite field >>, the images of ,\,f> into >,\,x|]>>, an invertible matrix with entries in >>, a primitive element \>> and a Kronecker parametrization of ,\,f|)>\N+-1|)>> over >> in time *>|)>*>*k*log p|)>> with any arbitrarily small fixed probability of failure, or in expected time *>|)>*>*k*log p|)>> with a Las Vegas algorithm. <\proof> The case <\equation*> p\B\max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>,100*D|)>. is already handled in Theorem, and corresponds to . So we focus on the opposite case \B>, for which >|)>>. We compute the first integer >> such that <\equation*> p>>\B, that is <\equation*> >\||\>=O>|k*log p>|)>. Then we set <\equation*> l\mini\\,i*k+1\>|)>, so is coprime to and we have >|)>|)>>. Now we appeal to Lemma: we construct >> in time *log p+l+2>*k*log p|)>> with probability of failure 1/4>; then the casts between >> and >> can be achieved in time *k*log p|)>>. With this construction at hand we proceed as follows: <\itemize> We cast ,\,f> into >,\,x|]>> in time >*l*k*log p|)>=>*k*log p|)>>; We call the Kronecker solver over >>. In view of Theorem, the resolution over >> takes time *>|)>*>*k*log p|)>> with a probability of failure 1/2>. Using Proposition, the correctness of the result can be verified in time >|)>*n>*k*log p|)>> as well. We are thus able to reach any arbitrarily small fixed probability of failure by repeating the resolution sufficiently many times. Altogether, this leads to a Las Vegas algorithm that satisfies the claimed expected complexity bound\Vof course, > needs to be replaced by /3> from the outset to conclude the proof. <\remark> If some of the > are , then it is far more efficient to compute: <\itemize> a permutation > of ,n|}>>, \>, linear forms ,\,l\\*y+\+\*y>, homogeneous polynomials ,\,g>\\,\,y|]>> of total degrees 2>, such that >,\,l|)>=0> for ,t> and ,\,y|)>=f>,\,l|)>> for ,n-t>>. In this way we are reduced to solve a \Psmaller\Q system with all degrees 2>. 
This simplification in the input system can be achieved in time >+>|)>*k*log p|)>> by means of Proposition. We leave out the details for the sake of conciseness. From now we turn to solving polynomial systems =\=f=0> with coefficients in>, still under assumptions , , and . We still write ,\,f|)>>, and we appeal to the algorithm designed in4.6>: we first perform the resolution modulo asuitable prime number , and then lift the parametrization of the solution set over the adic numbers by means of Newton\UHensel lifting. Roughly speaking, once the bit size exceeds twice the maximum bit size of the integers of the Kronecker parametrization over>, then this parametrization can be reconstructed from its -adic expansion. In order to formalize this approach, we first need to estimate the bit size of the Kronecker parametrizations in terms of the input size. Then we design a probabilistic algorithm that generates suitable values for with high probability. For a scalar, a vector, a matrix, or a polynomial over > we write |A|\<\|\|\>>>> for the maximal norm of its entries or coefficients. Then the height of , written , is defined as <\equation*> ht A\max|A|\<\|\|\>>>|)>. Until the end of the text we set <\equation*> H\|)>|d>. -adic Hensel lifting> In section we have described an algorithm for lifting power series in a Kronecker parametrization. In the following paragraph we adapt the method to lift -adic integers instead. The algorithm originates from4.6>. The Jacobian matrix of ,\,f> in ,\,x> is still written <\equation*> J\ f|\ x>>|>| f|\ x>>>|>||>>| f|\ x>>|>| f|\ x>>>>>>. <\specified-algorithm> <\description-compact> ,\,f\\,\,x|]>> and the parametrization ,\,v\/p*\|)>> of ,\,f|)>>+-1|)> mod p> by *x+\+\*x>; an integer \0>. The Kronecker parametrization ,,\,> in /p>*\|)>> of ,\,f|)>+-1|)>> mod p>> by . ,\,f> satisfy , , , , . <|specified-algorithm> <\enumerate> Initialize \1>, \q>, \v>,\, \v>. Initialize with the inverse of J,\,v|)>> modulo >. 
While \\> do the following: <\enumerate> update J,\,|)> rem > and B-B*|)> rem >, over /p>*\>; set \min,\|)>>; compute >>|>>|>>>>>\>>|>>|>>>>>-B*,\,|)>>>|>>|,\,|)>>>>>> rem > over /p>*\>; compute \\*+\+\*-t> over /p>*\>; replace \-*\ rem |)>> computed over /p>*\>; for from to , <\indent> replace \-*\ rem |)>> computed over /p>*\>. For from to , compute \* rem > over /p>*\>. Return ,,\,>. <\proposition> Algorithm is correct. Let \0> be a fixed rational value. If <\equation*> p\max>,|)>*>+1|)>*n*>/8>|)>*>-1|)>, then Algorithm takes time >|)>*n>*\*log p+>*>+1|)>|)>>, where >\max,\,ht f|)>>. <\proof> This algorithm directly comes from4.6>. We do not repeat the proof of the correctness but focus directly on its bit complexity for the dense representation of input polynomials. Before all we may compute the partial derivatives of the > in time >*>+\*log p|)>|)>> by Lemma to any precision >. In step2, we compute in time <\equation*> >|)>*n>*log p|)>, by Theorem. Then the inversion of modulo is performed by Berkowitz' algorithm in time +1>**log p|)>=>*log p|)>>. In step3 the evaluations of ,\,f> and > at ,\,|)>> at a given precision> take time >|)>*n>*\*log p+>*\*log p|)>>, again in view of Theorem. Since we keep doubling> until we reach the precision >, the total cost is bounded by >|)>*n>*\*log p|)>>. The rest of the computations in steps3 and4 take time >*D*\*log p|)>=*\*log p|)>>. Let be a zero dimensional subvariety of >. \PThe\Q of is the polynomial \\,\,\|]>> that satisfies <\equation*> \,\,\|)>=0\V\\>*x+\*x+\+\*x|)>\\, where the > are taken in >. The Chow form is defined up to a scalar factor in >, and is homogeneous of total degree . From now we assume that is the variety of N\\N,\,f\N|)>>, where is an invertible \> integer matrix such that N> is in Noether position. In this setting, is included in the affine subspace > defined by =1>, and we have =D>>. We then define the > of > as the >-multiple of > having the coefficient of >> equal to. 
The normalized Chow form belongs to ,\,\|]>>. In order to bound bit sizes of Kronecker parametrizations of N>, we first need to bound the bit size of the coefficients of >. For this purpose we follow a construction due to Philippon, and introduce <\equation*> \\m;S|)>+D* , where ;S|)>> denotes the >-Mahler measure of > defined as <\equation*> m;S|)>\>log |\,\,\|)>|>*\,\,\|)> where > is the usual Haar measure of mass 1 over the sphere <\equation*> S\,\,z|)>\\\|\|>+\+|\|>=1|}>. So ;S|)>> is to be understood as a real >-dimensional integral. On the other hand, the usual Mahler measure of > is defined as an -dimensional complex contour integral <\equation*> m|)>\\log *\*t>,\,\*\*t>|)>|\|>*\t*\*\t. The relationship between these two measures is given in4>: <\equation*> 0\m|)>-m;S|)>\D*, which implies <\equation> m|)>\\. Then we appeal to2.10, p.553> which ensures the existence of a rational number such that > has all its coefficients in > and <\equation> log +\\\N|)>|d>+2*n*log|)>*D. To relate the Mahler measure to the actual bit size of > we use 1.1, p.529>: <\equation> |)>-log |\|\<\|\|\>>>|\|>\log*D. Combining the three latter inequalities leads to <\eqnarray*> |)>=log |c*\|\<\|\|\>>>>|>|+log |\|\<\|\|\>>>>>||>|+m|)>+log*D)>>>||>|+\+log*D)>>>||>|\N|)>|d>+2*n*log|)>*D+log*D)>>>||>|\N|)>|d>+*log|)>*D.>>>> Let ,\,w> be a Kronecker parametrization of N+-1|)>> by the primitive element *x+\+\*x> taken in ,\,x|]>>. It relates to the Chow form > of N> as follows: <\eqnarray*> >||,\,-\|)>,>>|>||\|\\>,\,-\|)>i=1,\,n.>>>> Since > belongs to ,\,\|]>>, all the coefficients of and of > for ,n> also belong to >, and their bit size can be bounded as follows. 
By \2> and inequality, the Chow form > admits at most +n|n>\*D> terms, whence <\eqnarray*> |c*q|\<\|\|\>>>>|>|*D*|c*\|\<\|\|\>>>*|\|\<\|\|\>>>>,>>||c*q|\<\|\|\>>>>|>|*D*|c*\|\<\|\|\>>>*|\|\<\|\|\>>>>,>>||c*w|\<\|\|\>>>>|>|*D*|c*\|\<\|\|\>>>*|\|\<\|\|\>>>>i=1,\,n,>>>> where |\|\<\|\|\>>>\1> is a shorthand for |,\,\|)>|\<\|\|\>>>>. Now we estimate how the coefficients of increase because of the change of variables .\ <\lemma> Let \,\,x|]>> be homogeneous of total degree 2>, and let be a >\> matrix over >. Then we have <\equation*> |f\N|\<\|\|\>>>\*d!*d*|f|\<\|\|\>>>*|N|\<\|\|\>>>. <\proof> The proof is immediate by using |+\+x|)>|\<\|\|\>>>\d!> and inequality. <\lemma> For all 1> we have \2*d*log d>. <\proof> This is a well known unconditional variant of Stirling's formula: <\equation*> log\log t*\t=*log--2*log 2+2\2*d*log d. <\lemma> Assume that , , and hold, and let be an \> matrix over > such that \N,\,f\N> is in Noether position. Then we have <\equation*> ht\N|)>\ht|)>+d*log |N|\<\|\|\>>>+4*n*d*log d,i=1,\,n. In addition, for a Kronecker parametrization ,\,w> of \N,\,f\N|)>+-1|)>>, and with as above, then we have <\eqnarray*> |>|,ht|)>,ht|)>,\,ht|)>|)>>>||>|+n*log |N|\<\|\|\>>>+log |\|\<\|\|\>>>+16*n*log D|)>*D.>>>> <\proof> From Lemmas and we obtain <\eqnarray*> \N|)>>|>||)>+d*log |N|\<\|\|\>>>+log*d!*d|)>>>||>||)>+d*log |N|\<\|\|\>>>+4*n*d*log d.>>>> Consequently, <\eqnarray*> \N|)>|d>>|>||)>+d*log |N|\<\|\|\>>>+4*n*d*log d|d>>>||>|+n*log |N|\<\|\|\>>>+4*n*log D.>>>> Combining this bound with\U and using \D>, we deduce <\eqnarray*> |>|+*log D+log |c*\|\<\|\|\>>>+D*log |\|\<\|\|\>>>>>||>|+*log D+\N|)>|d>+*log|)>*D+D*log |\|\<\|\|\>>>>>||>|+n*log |N|\<\|\|\>>>+log |\|\<\|\|\>>>+4*n*log D+*log|)>*D.>>>> Finally we use 2\D> to bound *log> by >, which concludes the proof. 
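The factorial estimate in the lemma above, log d! ≤ 2 d log d for d ≥ 1, is easy to check numerically. The following sketch is only a sanity check of that inequality, not part of any algorithm in the paper; it uses the fact that `math.lgamma(d + 1)` equals log d!.

```python
# Sanity check of the bound log(d!) <= 2*d*log(d), valid for all d >= 1
# (both sides vanish at d = 1). math.lgamma(d + 1) equals log(d!).
import math

for d in range(1, 500):
    assert math.lgamma(d + 1) <= 2 * d * math.log(d) + 1e-12
```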
/x|2|20>|>> Given an input polynomial system =\=f=0> satisfying , , and , and having all its coefficients in >, we are led to characterize prime numbers for which the system admits > isolated simple roots when considered modulo . We will rely on the verification criterion from Proposition, in order to ensure that a resolution modulo can be lifted over the rational numbers. <\lemma> For all 1> we have +z|)>|)>\*log k>. <\proof> The inequality is an equality for . For 1> we have <\equation*> |+z|)>|\<\|\|\>>>\k*|+z|)>|\<\|\|\>>>, whence |+z|)>|\<\|\|\>>>\k> by induction on . <\lemma> Let be a n> matrix over the real numbers, then we have \|M|\<\|\|\>>\|M|\<\|\|\>>\n*|M|\<\|\|\>>>>, where > represents the -th column vector of . <\lemma> Let ,\,f> be homogeneous polynomials in ,\,x|]>> which satisfy , , and. There exists a positive integer > such that <\equation*> ht \\n*>*+77*n*log D|)>*D, and for any that does not divide > the sequence ,\,f> satisfies , and over /p*\>.\ <\proof> By Lemma there exists an invertible \> matrix with integer entries such that |N|\<\|\|\>>>\5*D> and \N,\,f\N> is in Noether position. On the other hand by Lemma there exists a primitive element *x+\+\*x> for \N,\,f\N|)>>> with integer coefficients and such that |\|\<\|\|\>>>\2*D>. Let ,\,w> represent the corresponding Kronecker parametrization, and let and > be as above. Let > be a non-zero coefficient in > of a term of degree > for ,n>, let ,\,x|]>|)>n>> be the Jacobian matrix of \N,\,f\N> with respect to ,\,x>, and <\eqnarray*> >|>||\|>=|)>|\|>,>>|>|>|,c*w,\,c*w|)>,c*q|)>|\|>,>>|>|>|*\*\*\*\.>>>> By construction, >, > and thus > are positive integers. Now if does not divide >, then mod p|)>=d> for ,n>, and the system =\=f=0> admits > isolated regular roots modulo . Consequently ,\,f> satisfyR>, R>, and modulo by Proposition. To conclude the proof it remains to bound >. 
First, we may further rewrite a bound for the quantity introduced in Lemma, by making use of \2>: <\eqnarray*> |>|+n*log |N|\<\|\|\>>>+log |\|\<\|\|\>>>+16*n*log D|)>*D>>||>|+n*log|)>+log|)>+16*n*log D|)>*D>>||>|+n*log|)>+log|)>+16*n*log D|)>*D>>||>|+24*n*log D|)>*D.>>>> By applying Lemma on the Sylvester matrix of and >, we obtain <\eqnarray*> >|>||c*q|\<\|\|\>>-1>*|c*q|\<\|\|\>>>\+1|)>>*|c*q|\<\|\|\>>>-1>*|c*q|\<\|\|\>>>>,>>>> whence <\eqnarray*> >|>|*log+1|)>+2*K*D>>||>|*log D+2*+24*n*log D|)>*D>>||>|+25*n*log D|)>*D.>>>> As to >, we deduce from Lemma that <\eqnarray*> \N|)>>|>||)>+d*log |N|\<\|\|\>>>+4*n*d*log d>>||>||)>+d*log|)>+4*n*d*log d>>||>||)>+2*d*log D+7*n*d*log d>>||>||)>+9*n*d*log D.>>>> On the one hand, we have <\equation*> deg*\N|)>|\x>,c*w,\,c*w|)>|)>\-1|)>*-1|)>, hence <\equation*> deg,c*w,\,c*w|)>|)>\n*>-1|)>*-1|)>. On the other hand, Lemma gives us <\eqnarray*> |)>>*|)>>*\*|)>>|)>>|>|+\+e|)>*|)>.>>>> From Lemma and *-1+n|n>=d*|d+n>*+n|n>\*d>, it follows that <\eqnarray*> ||\N|)>|\x>,c*w,\,c*w|)>|)>>>||>|*-1+n|n>|)>+ht\N|)>+-1|)>*|)>>>||>|*d|)>+ht|)>+9*n*d*log D+-1|)>*+24*n*log D|)>*D+log D|)>>>||>|*log D+d*+25*n*log D|)>*D>>||>|*+37*n*log D|)>*D.>>>> Let > denote the group of permutations of ,n|}>>. Then we have <\eqnarray*> ,c*w,\,c*w|)>|)>>|>|\\>sgn|)>*\N|)>|\x>>,c*w,\,c*w|)>|)>>>||>|+d*+37*n*log D|)>*D>>||>|>*+38*n*log D|)>*D.>>>> Again by Hadamard bound we deduce that <\eqnarray*> >|>||c*q|\<\|\|\>>>-1|)>*-1|)>>*|det J,c*w,\,c*w|)>|\<\|\|\>>>>>||>|+1|)>>-1|)>*-1|)>/2>*|c*q|\<\|\|\>>>>-1|)>*-1|)>>>>|||\>-1|)>*-1|)>+1|)>/2>*|det J,c*w,\,c*w|)>|\<\|\|\>>>>,>>>> whence <\eqnarray*> >|>|>-1|)>*-1|)>**log+1|)>++24*n*log D|)>*D|)>>>|||+D**log>-1|)>*-1|)>+1|)>+n*>*+38*n*log D|)>*D|)>>>||>|>*+25*n*log D|)>*D+n*>*+39*n*log D|)>*D>>|||>*+64*n*log D|)>*D.>>>> The conclusion follows from adding right-hand sides of,, and |)>+\+ht|)>\>*H>. <\remark> Theorem2.1 of provides a more general statement than Lemma for the affine case. 
Extensions to the quasi-projective case can be found in. <\lemma> Assume that \2> for ,n>. Let be an invertible \> matrix over > and let be a linear form in ,\,x|]>>. Let be a prime number, and let ,\,w> be polynomials in > such that: <\itemize> is monic of degree > and \D> for ,n>; is separable modulo ; ,\,w|)>=t*q mod |)>>; \w\\\w|)>> is a root of \N=\=f\N=0> modulo |)>>; The value of the Jacobian matrix of \N,\,f\N> in ,\,x> at \w\\\w|)>> is invertible modulo |)>>. Then \N,\,f\N> satisfy , , and are in Noether position (over >), is a primitive element for \N,\,f\N|)>+-1|)>>, and ,\,w> coincide modulo with the Kronecker parametrization of \N,\,f\N|)>+-1|)>> over > by. <\proof> By Proposition, the residues modulo of ,\,w> form a Kronecker parametrization of \N,\,f\N|)>+-1|)>>. We may use Algorithm from atheoretical point of view in order to prove the existence of univariate polynomials ,\,W> over the -adic numbers >, satisfying the following properties: <\itemize> is monic of degree > and \D> for ,n>; ,\,W> have nonnegative valuations in and coincide with ,\,w> modulo; ,\,W|)>=t*Q mod Q>; \W\\\W|)>> is a root of \N=\=f\N=0> modulo >; The value of the Jacobian matrix of \N,\,f\N> in ,\,x> at \W\\\W|)>>> is invertible modulo >. Consequently, Proposition ensures that ,\,W> is a Kronecker parametrization of \N,\,f\N|)>+-1|)>> over > by the primitive element. By uniqueness, it coincides with the image over > of the Kronecker parametrization of \N,\,f\N|)>>+-1|)>> over > by. Using again Proposition with the parametrization over the rationals concludes the proof. Now we address the question of obtaining a suitable prime number. <\lemma> There exists a probabilistic algorithm which, given positive integers > and such that \B>, computes a prime in the range ,2*B|}>>, which does not divide >, with probability of success 1/2>, and in time > B>. 
<\proof> This is a consequence of 18.10, part (i)>: we just appeal to the deterministic primality test designed in in order to ensure that the returned number is always prime. This test works in polynomial time. The idea behind the Kronecker solver over > is as follows. For a suitable prime , we use the Kronecker algorithm to solve the input system over /p*\>. If the algorithm succeeds, then we appeal to the verification criterion of Lemma. If the criterion passes, then we may lift the Kronecker parametrization over the p-adic integers to arbitrary precision > by using Algorithm, and we know that the p-adic parametrization is the image of the rational one. Consequently, as soon as > is sufficiently large, we can reconstruct the parametrization over >. Let us recall that a rational number , with =1> and 0>, is uniquely determined by its p-adic expansion to precision > whenever \>/2||0em||>>> and \>/2||0em||>>>; see for instance 4>. Using the fast Euclidean algorithm from 5.26>, this reconstruction takes *log p|)>*log*log p|)>|)>=*log p|)>>. The choices of and of the precision > needed by the solver are made precise in the following algorithm. We recall that > is a fixed positive rational number, thought to be small. <\algorithm> <\description> ,\,f> homogeneous in ,\,x|]>>. An invertible \> matrix over > such that \N,\,f\N|)>> is in Noether position, a linear form \,\,x|]>>, and a Kronecker parametrization ,\,w> by of \N,\,f\N|)>+-1|)>>. ,\,f> satisfy , , and . <\enumerate-numeric> For each from to , compute the maximum bit size > of the coefficients of >. Compute > and a positive integer > such that -2\|d>|)>/log 2\>. Compute \> such that <\eqnarray*> |\max>|||>>|>,|)>*>+1|)>*n*>/8>|)>*>-1|)>,>>|||,>>|||>*+77*n*log D|)>*D||>\B.>>>> Compute a prime number at random in ,2*B|}>> by means of Lemma. Call the Kronecker algorithm underlying Theorem with the images >,\,>> of ,\,f> in /p*\|)>,\,x|]>>. If the resolution fails then go to step 2.
Let >,>,>,>,\,>> be the data returned by the Kronecker algorithm in step3. If >\D>, or if >> is not separable, or if >>,\,>|)>\t*> mod >>, or if one of the >\>> does not vanish at >\>\\\>|)>> modulo >>, or if the value of the Jacobian matrix of >\>,\,>\>> in ,\,x> at >\>\\\>|)>> is not invertible modulo >>, then go to step2. Let (resp. ) be the canonical preimage of >> (resp. of >>) with entries in ,p-1|}>>. Let > be the first integer such that <\equation*> log >>\+*log p+16*n*log D|)>*D. Compute \N,\,f\N> over /p>*\>, and use Algorithm to obtain the Kronecker parametrization ,,\,> by of \N,\,f\N|)>+-1|)>> over /p>*\>. By rational reconstruction, compute the unique polynomials ,\,w> in > that coincide with ,,\,> modulo >>.\ Notice that > is whenever all the > have their coefficients in >. Therefore in the following complexity estimates we shall write +1> in order to make complexity bounds valid in this particular case. <\theorem> Algorithm is correct, as a Las Vegas algorithm, and takes expected time <\equation*> *>|)>**n>*+1|)>|)>. <\proof> By Lemma the prime determined in step2 does not divide the integer > defined in Lemma with probability 1/2>. In other words, the sequence >,\,>> satisfy properties , , and with probability 1/2>. Thanks to B>, Theorem ensures that the Kronecker solver modulo succeeds with probability 1/2>. It follows that the expected number of times that the algorithm repeats steps2 to4 is bounded. Once the algorithm passes step4, Lemma ensures that the parametrization modulo may be lifted to any arbitrary precision, and that the parametrization over> may be recovered whenever the precision is sufficiently large. According to the preceding discussion on rational reconstruction, we need that <\equation*> log >>\K, with defined in Lemma; it satisfies <\equation*> K\+n*log |N|\<\|\|\>>>+log |\|\<\|\|\>>>+16*n*log D|)>*D, with |N|\<\|\|\>>>> and |\|\<\|\|\>>>> being p\2*B>. This concludes the proof of correctness. 
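Step 6 of the algorithm recovers each rational coefficient a/b from its residue modulo p^κ; as recalled above, this succeeds as soon as |a| and b stay below the square root of p^κ/2. Here is a minimal self-contained sketch of rational reconstruction via the extended Euclidean algorithm; the paper relies on the fast Euclidean algorithm instead, and the function name below is ours.

```python
# Rational reconstruction: from c = a * b^{-1} mod m, recover (a, b) with
# |a| <= N and 0 < b <= N for N = isqrt(m // 2), by running the extended
# Euclidean algorithm on (m, c) until the remainder drops to at most N.
# This is the plain (quadratic) variant; the paper uses the fast one.
from fractions import Fraction
from math import isqrt

def rational_reconstruction(c, m):
    N = isqrt(m // 2)
    r0, s0 = m, 0
    r1, s1 = c % m, 1
    while r1 > N:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    return r1, s1          # invariant: r1 ≡ s1 * c (mod m)

# Recover 22/7 from its image modulo m = 10007.
m = 10007
c = 22 * pow(7, -1, m) % m
a, b = rational_reconstruction(c, m)
assert Fraction(a, b) == Fraction(22, 7)
```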
The computation of > needs time *h|)>=*>*+1|)>|)>> by Lemmas and. Then >, > and can be computed in time >, which is negligible since <\equation*> log B=O>+log+1|)>|)>=O+log+1|)>|)>. The cost of step 2 is > B> by Lemma, which is again negligible. By Theorem, step 3 takes time <\equation*> *>|)>*>*log p|)>=*>|)>*>*log+1|)>|)> since >. Proposition then provides us with the cost <\equation*> >|)>*n>*log p|)>=*>|)>*n>*log+1|)>|)> for step 4. Computing \N,\,f\N> over /p>*\> in step 5 takes >*\*log p|)>> by Proposition. The cost of the p-adic Hensel lifting is given in Proposition: <\equation*> >|)>*n>*\*log p|)>+>*>|)>=>|)>*n>*\*log p|)>+*>*+1|)>|)>, where >=max,\,h|)>=O>*+1|)>|)>>. We verify that <\equation*> \*log p=O+*log p+16*n*log D|)>*D|)>=*+1|)>|)>. Step 6 requires additional time **log p|)>=*+1|)>|)>>. The version of the Kronecker solver summarized in Algorithm can be optimized by using ideas previously developed for the case of input polynomials given by straight-line programs; see implementation details in the library of . A first optimization consists in minimizing the height of the parametrization over the rationals. For this purpose it is worth finding a matrix and a primitive element of small height. This search may be achieved efficiently modulo , and we refer the reader to for the description of the underlying algorithms. A second optimization concerns the early termination of the lifting stage over the p-adic integers. In fact one may try to perform rational reconstruction before the bound on the precision is actually reached. If all the reconstructions succeed, then it suffices to verify that the Kronecker parametrization actually describes the solution set. A probabilistic method to do this consists in evaluating the equations at the parametrization modulo an independent random prime number > with a bit size similar to the one of. This verification may also be done deterministically over the rationals, but at a higher cost.
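The probabilistic verification just mentioned amounts to substituting the candidate parametrization into each equation and reducing modulo (q(t), p′). The following self-contained toy sketch uses our own example system, not the paper's data structures: the system x1² = 2, x2² = x1 is parametrized by the primitive element u = x2, with q(t) = t⁴ − 2, x1 = t², x2 = t, and we check it modulo a stand-in prime p′.

```python
# Toy sketch of the verification step: reduce f_i(w1(t), ..., wn(t))
# modulo (q(t), p') and check that the result vanishes. Polynomials are
# coefficient lists, lowest degree first; q is monic. The example system
# x1^2 = 2, x2^2 = x1 (ours, for illustration only) is parametrized by
# the primitive element u = x2: q(t) = t^4 - 2, x1 = t^2, x2 = t.

def polmod(a, q, p):
    """Reduce a modulo the monic polynomial q, coefficients modulo p."""
    a = [c % p for c in a]
    dq = len(q) - 1
    while len(a) > dq:
        c = a.pop()               # leading coefficient
        if c:                     # x^(deg a) = -(q - x^dq) * x^(deg a - dq)
            for i in range(dq):
                a[len(a) - dq + i] = (a[len(a) - dq + i] - c * q[i]) % p
    return a

def polmul(a, b, q, p):
    prod = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            prod[i + j] = (prod[i + j] + x * y) % p
    return polmod(prod, q, p)

def polsub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

pp = 1000003                       # stands in for the independent prime p'
q = [-2, 0, 0, 0, 1]               # q(t) = t^4 - 2
w1, w2 = [0, 0, 1], [0, 1]         # x1 = t^2, x2 = t

f1_at_w = polsub(polmul(w1, w1, q, pp), [2], pp)   # x1^2 - 2
f2_at_w = polsub(polmul(w2, w2, q, pp), w1, pp)    # x2^2 - x1
assert all(c == 0 for c in f1_at_w + f2_at_w)
```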
For an input system in variables of degrees =2>, the quantity |)>> is of the order . Therefore with , Algorithm involves primes of bit size close to 52 bits. In fact, the bounds used in the design of Algorithm have been simplified and overestimated for the sake of the presentation. Sharper bounds should be considered in practice, in order to ensure that fits into 32 bits for small and moderate systems, and 64 bits for larger ones. Let us finally mention that the present bound for the bit size of the Kronecker parametrization over the rationals turns out to be essentially sharp for generic systems, according to 3.1>. Consequently the complexity reached within Theorem is nearly linear in the \Pexpected\Q size of the output, from the asymptotic point of view. This fact is illustrated by the following example. <\example> Consider the system, adapted from, <\equation*> |>||-x>>|>||*x-x>>||>|>|>||*x-x>>|>||*x-a*x,>>>>> where 2> and \0>>. Let > be a root of <\equation*> x>-a>=0, and set <\equation*> \\>,\\|\>,\,\\|\>, so that \\\\\|)>> is clearly a zero of =\=f=0>. For ,1>, we observe that <\equation*> |\>=>|\>>. It follows that \\\\\|)>> is also a zero of >. The value of the Jacobian matrix of ,\,f>> in ,\,x> at \\\\\|)>> is <\equation*> A\|||>||||>>|>|>|||*\*\>>||>|>||>>|||>|>|*\*\>>||||>|*\*\>>>>>. By expanding with respect to the last column, we obtain <\eqnarray*> ||*d*\>+**\*\*|)>*\*|)>*|)>>>|||*d*\>+*\>**d*|\>*|\>|)>*\*|\>|)>>>|||*\>**d*>|\>>*>|\>>|)>*\*>|\>>|)>|)>>>|||*\>*d*>|\>>*-d>|\-d>>|)>|)>>>|||*\>**d*>|\>>|)>>>|||*\>**d|)>>>|||*\>*d.>>>> Consequently is invertible. From <\equation*> |\>=>|\>>=>|a>>*>|\>>=-d>|a-d>>, Proposition thus ensures that ,\,f|)>> satisfies R> and R>, that it is in Noether position, and that it is parametrized by the primitive element >, as follows: <\equation*> ,\,f|)>+-1|)>=>-a>,x--d+1>|a-d>>,\,x--d+1>|a-1>>|)>. 
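In the same spirit as the example above, here is an independent toy instance of a Kronecker parametrization over the rationals; it is our own illustration, not taken from the paper. For the system x1² = 2, x2² = 3, the primitive element u = x1 + x2 yields q(t) = t⁴ − 10t² + 1 with x1 = (t³ − 9t)/2 and x2 = (11t − t³)/2, which we verify by exact reduction modulo q(t).

```python
# Verify, with exact rational arithmetic, that u = x1 + x2 parametrizes
# the toy system x1^2 = 2, x2^2 = 3 (our own illustration):
#   q(t) = t^4 - 10 t^2 + 1,  x1 = (t^3 - 9t)/2,  x2 = (11t - t^3)/2.
from fractions import Fraction as F

def polmul(a, b):
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polmod(a, q):          # q monic; coefficient lists, lowest degree first
    a = a[:]
    dq = len(q) - 1
    while len(a) > dq:
        c = a.pop()
        if c:
            for i in range(dq):
                a[len(a) - dq + i] -= c * q[i]
    return a

q = [F(1), F(0), F(-10), F(0), F(1)]       # t^4 - 10 t^2 + 1
w1 = [F(0), F(-9, 2), F(0), F(1, 2)]       # (t^3 - 9 t)/2
w2 = [F(0), F(11, 2), F(0), F(-1, 2)]      # (11 t - t^3)/2

r1 = polmod(polmul(w1, w1), q); r1[0] -= 2  # x1^2 - 2 vanishes on the zeros
r2 = polmod(polmul(w2, w2), q); r2[0] -= 3  # x2^2 - 3 vanishes on the zeros
assert all(c == 0 for c in r1) and all(c == 0 for c in r2)
```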
The Kronecker solver was already known to be competitive both in theory and practice for polynomial systems represented by straight-line programs; see for instance timings in. Under suitable genericity assumptions and for suitable ground fields, we have shown in this paper that this solver also leads to improved probabilistic asymptotic complexity bounds for the dense representation of input polynomials. Our main results concern finite fields and rational numbers, but our methods can in principle be extended to any ground field over which fast multivariate multi-modular composition is available. Unfortunately, it is still a major open problem to design an algorithm for modular composition over arbitrary ground fields. Another major problem concerns the practical efficiency of algorithms for modular composition. This holds in particular for the recent fast algorithms for finite fields designed by Kedlaya and Umans, and revisited in; we refer the reader to the concluding section of for a quantitative discussion. Consequently we do not expect our Theorem to be of practical use, and we cannot claim our new main complexity bounds for polynomial system solving to be relevant in practice. Nevertheless, it is important to point out that Theorem uses modular composition as a blackbox. Whenever faster algorithms for this task become available, they can directly be used in our algorithms. As one possible research direction, it is worthwhile to investigate the replacement of generic modular compositions by specific ones, such as those studied in and which feature more promising performances. The Kronecker solver admits different extensions for when genericity assumptions and are no longer satisfied: see for instance, and more references in. In this case, the system might have an infinite number of solutions, and one may express them in terms of a union of equidimensional or irreducible algebraic varieties. 
We expect that these extensions could be revisited for the dense representation of input polynomials, and plan to investigate the details in future work. Yet another important question for systems over the rational numbers concerns the complexity of finding numerical approximations of the roots. In generic cases we have proved in Theorem how to compute Kronecker representations of solution sets in quasi-optimal time. This implies that numerical roots may also be computed in quasi-linear time, but only when the requested precision is \Psufficiently large\Q, by means of fast univariate polynomial solvers. It would be interesting to quantify the latter \Psufficiently large\Q, and to compare with the state of the art of numerical polynomial solvers. This appendix is devoted to subjecting a multivariate polynomial to a linear change of variables. More precisely, given \,\,x|]>> and an n> matrix |)>i\n,1\j\n>> over a commutative ring >, we wish to compute <\equation*> N|)>,\,x|)>\f*x+\+N*x,\,N*x+\+N*x|)>. The fast algorithms that we propose below do not seem to be available in the literature. They are suitable for any coefficient ring with sufficiently many elements, and they are also well suited for homogeneous polynomials. In this subsection we focus on the algebraic model (computation trees, for instance); we let > be an effective commutative ring, and we let > be a cost function such that two polynomials in \>> may be multiplied with cost |)>>. The evaluation of a multivariate polynomial at the points of a block >, where is a finite subset of >, is usually achieved by the successive use of fast univariate evaluations, as recalled in the following lemma. <\lemma> Let \1>, let \,\,x|]>> be of partial degree \> in > for ,n>, and let be a subset of > of cardinality >. Then, all the values of at > can be computed with *|)>*log \|)>> arithmetic operations in >.
<\proof> We interpret \,\,x|]>> as a univariate polynomial in >, <\equation*> f,\,x|)>=f,\,x|)>+\+f-1>,\,x|)>*x-1>. We evaluate ,\,-1>>> at > recursively. Then, for each ,\,\|)>\S>, we evaluate ,\,\,x|)>> at all the points of , with a total cost *|)>*log \|)>>. Denoting by |)>> the cost of the algorithm in terms of operations in >, we thus obtain <\eqnarray*> |)>>||*|)>+O*|)>*log \|)>.>>>> By induction over , it follows that <\eqnarray*> |)>>||*|)>*log \|)>,>>>> which implies the claimed bound. The next lemma, also well known, concerns the corresponding interpolation problem.\ <\lemma> Let \1>, let ,\,\>> be pairwise distinct points in > such that -\> is invertible whenever j>, let ,\,i>> be a family of values in > for ,\,i|)>> running over ,\|}>>. The unique polynomial \,\,x|]>> of partial degrees\> and such that >,\,\>|)>=\,\,i>> for all ,\,i|)>\,\|}>> can be computed with *|)>*log \|)>> arithmetic operations in >, including inversions. <\proof> Again we interpret \,\,x|]>> as a univariate polynomial in >, <\equation> f,\,x|)>=f,\,x|)>+\+f-1>,\,x|)>*x-1>. For all ,\,i|)>\,\-1|}>> we interpolate the values >,\,\>|)>,\,-1>>,\,\>|)>>> with *O|)>*log \|)>> operations in >. We then recursively interpolate ,\,f-1>> and form as in(). The total cost is obtained as in the proof of the previous lemma. The aim of the following proposition is the fast evaluation of at a set of points of the form |)>+B>, for any matrix and any vector . <\proposition> Let \1>, let \,\,x|]>> be of partial degree \> in > for ,n>, let ,\,\>|}>> be a subset of > of cardinality > such that -\> is invertible whenever j>, let be a n> matrix over >, and let \>. Let be the column vector with entries ,\,x>. If anLU-decomposition of is given, then |)>+B|)>> and > can be computed with *|)>*log \+n>|)>> arithmetic operations in >, including inversions. <\proof> We write ,\,\|)>>. 
We first assume that |)>i\n,1\j\n>> is upper triangular, and we partition |)>+B> into <\eqnarray*> |)>+B>||>N\|}>|)>+B>>|||>|)>+|)>\*\+\|}>,>>>> where \|)>i\n-1,1\j\n-1>> and \\*>>|>>|>>>>>+>>|>>|>>>>>>. We compute <\equation*> g,\,x|)>\f,\,x,N*\+\|)> for ,\> using *|)>*log \|)>> operations in >. For ,\>, we then evaluate ,\,x|)>> at |)>+> by induction. The base case takes constant time >. Consequently, for any , the total number of operations in > is *|)>*log \|)>>, by the same argument as in the proof of Lemma. We recover ,\,x|)>+B|)>> with *|)>*log \|)>> operations in > by Lemma. If is lower triangular then we may revert of the variables in and the columns of in order to reduce to the upper triangular case. Alternatively, we may adapt the latter decomposition of the set of points, as follows: <\eqnarray*> |)>+B>||>N|}>\S|)>+B>>|||>*\+\|}>\|)>+|)>,>>>> where \|)>i\n,2\j\n>> and \\*>>|>>|>>>>>+>>|>>|>>>>>>. So we compute <\equation*> g,\,x|)>\f*\+\,x,\,x|)> and evaluate ,\,x|)>> at |)>+> by induction, for ,\>. Finally if is general, then it suffices to use the given LU-decomposition, where is lower triangular with on the diagonal, and is upper triangular. In fact we have |)>+B|)>=L|)>|)>+L*B|)>>, so we compute L> and then L|)>|)>+L*B|)>>> and L|)>*B|)>>. In the next lemma the same technique is adapted to homogeneous polynomials. <\lemma> Let \,\,x|]>> be homogeneous of degree 1>, let be a \> matrix over >, and let ,\,\|}>> be a subset of > of cardinality such that -\> is invertible whenever j>. If an LU-decomposition of is given, then N> can be computed with **log d|)>> arithmetic operations in >. <\proof> Assume first that |)>i\n,0\j\n>> is lower triangular and let \|)>i\n,1\j\n>>. We are led to compose ,x,\,x|)>> with <\equation*> *>>|>>|>>>>>+>>|>>|>>>>> by means of Proposition. If is upper triangular then it suffices to revert the variables ,\,x> in , and the columns of , in order to reduce to the lower triangular case. 
Alternatively, we may set \|)>i\n-1,0\j\n-1>> and compose ,\,x,N|)>> with <\equation*> *>>|>>|>>>>>+>>|>>|>>>>>, in order to obtain N|)>,\,x,1|)>>. Finally, for any , it suffices to use the given LUdecomposition. <\proposition> Let \,\,x|]>> be homogeneous of degree 2>, let be an >\>>> matrix over >, and let ,\,\|}>> be a subset of > of cardinality > such that -\> is invertible whenever j>. If an LU-decomposition of is given, then N>> can be computed with *d**log d|)>> arithmetic operations in >. <\proof> The total number of coefficients in is |)>> by inequality. We decompose <\equation> f=x*g+x*g+\+x*g, where *g,x,\,x|)>> is made of the terms of which are multiple of >, then *g> is made of the terms of *g> which are multiple of >,..., and finally *g> is made of the terms of *g+\+x*g|)>> which are multiple of > (that is a >-multiple of apower of >). In this way, we are led to compute \N> for ,n>, with > of degreed-1>; this requires *d**log d|)>> operations in >, by Lemma. Then N> can be recovered with further *d|)>> operations. <\remark> If one can use specific sequences of points >, for instance in geometric progressions, then multipoint evaluations and interpolations in one variable and in degree over > cost |)>> by means of, that saves a factor of in the above complexity estimates. For the purpose of the present paper, we need to adapt the results of the previous subsection to the case when > is the Galois Ring >,k|)>>, and in the context of Turing machines. In the next lemmas we use the lexicographic order on >, written >, definedby <\equation*> \\\\j\,n|}>,\=\\\\\=\\\\\|)>. In terms of Turing machines, we need the following variants of Lemmas and. <\lemma> Let \1>, let GR>,k|)>,\,x|]>> be of partial degree \> in > for ,n>, and let ,\,\>> be values in >,k|)>>. Then, the values>,\,\>|)>> for ,\,i|)>> running over ,\|}>> in the lexicographic order> can be computed in time <\equation*> n*\* \*\*k*log p|)>. 
<\proof> The proof follows the one of Lemma while taking data reorganizations into account. More precisely, using one \\> matrix transposition, we reorganize the values of the > after the recursive calls into the sequence of <\equation*> f>,\,\>|)>,\,f-1>>,\,\>|)> for ,\,i|)>> running over ,\|}>> in the lexicographic order>. Then, after the multi-point evaluations of >,\,\>,x|)>>, we need to transpose the \\> array made of the values of , in order to ensure the lexicographic ordering in the output. The cost of these transpositions is *log \*\*k*log p|)>> by Lemma, which is negligible. <\lemma> Assume \1> and \\>. Let ,\,\>> be pairwise distinct values in >,k|)>>> such that -\> is invertible modulo for all j>, and let ,\,i>> be a family of values in >,k|)>> for ,\,i|)>> running over ,\|}>> in the lexicographic order>. The unique polynomial GR>,k|)>,\,x|]>> of partial degree\> in > for ,n>, and such that >,\,\>|)>=\,\,i>> for all ,\,i|)>> in ,\|}>>, can be computed in time <\equation*> n*\* \*\*k*log p|)>. <\proof> The proof follows the one of Lemma, by doing the data reorganizations in the opposite direction from the one in the proof of Lemma. From now, for convenience, we discard the case =1>. In this way, whenever \2>, we may use >=log>|)>>. <\proposition> Assume \2> and \\>. Let GR>,k|)>,\,x|]>> be of partial degree \> in > for ,n>, and let be a n> matrix over >,k|)>>. If an LU-decomposition of is given, then N> can be computed in time *\*k*log p|)>>. <\proof> We first generate a subset ,\,\>|}>> of > of cardinality > in time *k*log p|)>>; this ensures the invertibility of -\> for j>. The proof then follows the one of Proposition while taking data reorganizations into account. 
When is upper triangular, the computation of ,\,g>> requires the multi-point evaluation of regarded in >,k|)>,\,x|]>|]>>: we may simply appeal to the fast univariate algorithm because it only involves additions, subtractions and products by elements of >,k|)>> over the ground ring >,k|)>,\,x|]>>. Consequently ,\,g>> may be obtained in time **\*k*log p|)>>, by Lemma. In addition, the \\> array of values of the > must be transposed at the end, in order to guarantee the lexicographic ordering necessary to interpolate N>. When is lower triangular, the data reorganization costs essentially the same, except that the computation of ,\,g>> takes time **\*k*log p|)>> by Lemmas and. Before achieving the proof of Proposition, we further need the following lemma in order to change the representation of a homogeneous polynomial. <\lemma> Let be a homogeneous polynomial of degree 2> in >,k|)>,\,x|]>>, represented as before by >,\,x|)>\f,\,x|)>> and , and let ,n|}>>. Then, for any \GR>,k|)>> we can compute >,\,x,x,\,x|)>\f,\,x,\,x,\,x|)>> in time *\*k*log p|)>>. <\proof> For simplicity the proof is done for , but it extends in a coefficientwise manner to any . A sparse representation of is made of a sequence of pairs of coefficients and vector exponents. More precisely, if \>f*x>*\*x>> then a sparse representation of it is the sequence of the pairs ,e|)>>, for all the non-zero coefficients >. The bit size of avector exponent is >, and therefore the bit size of a sparse representation of is **\*k*log p|)>> by. In order to prove the lemma, we first convert , given in dense representation, into a sparse representation. When the sparse representation of >> may be obtained in time *k*log p|)>>. Otherwise 2> and we regard >> in >,k|)>,\,x|]>|]>>, <\equation*> f>,\,x|)>=f>,\,x|)>+\+f>,\,x|)>*x, and recursively compute the sparse representation of >> for ,d>. 
These representations may naturally be glued together into a sparse representation of >>, in time **\*k*log p|)>>, by adding the exponent of > into each exponent vector. A straightforward induction leads to a total time **\*k*log p|)>> for the change of representation of >>. Then the sparse representation of may be deduced with additional time **\*k*log p|)>> by appending the exponent of > needed for homogenization. Second, from the latter sparse representation of we may simply discard the exponents of > and multiply the coefficients with the corresponding powers of >, in order to obtain a sparse representation of >> in time *\*k*log p|)>>. Finally it remains to construct the dense representation of >> from its sparse representation. To this aim we sort the sparse representation in increasing lexicographic order on the exponent vectors in time *log|)>**\*k*log p|)>>. We next compute the dense representation by induction over . Writing <\equation*> f>,\,x|)>=f>,\,x|)>+\+f-1>>,\,x|)>*x-1>, the sparse representations of >,\,f-1>>> are computed by induction, after removal of the powers of >. The induction ends when , in which case the conversion to dense representation requires time *k*log p|)>>. In total, the dense representation of >> can be computed in time *log|)>**\*k*log p|)>>. <\render-proof|Proof of Proposition> We follow the proofs of Lemma and Proposition, still while taking into account the cost of data reorganizations. In the proof of Lemma, the cost of obtaining ,x,\,x|)>> and ,\,x,N|)>>> is given by Lemma, that is *\*k*log p|)>>.\ In the proof of Proposition we first need to compute the decomposition of. The polynomial <\equation*> g,\,x|)>=f,\,x|)>+f,\,x|)>*x+\+f,\,x|)>*x is represented by <\equation*> g>,\,x|)>\f>,\,x|)>+f>,\,x|)>*x+\+f>,\,x|)>*x and . Consequently >> may be easily obtained in time *\*k*log p|)>>. Then the rest of the decomposition >,\,g>> is obtained from >,\,x|)>>, recursively. 
The total cost for obtaining all the >> is therefore bounded by *\*k*log p|)>>. For any GR>,k|)>>, any ,n|}>>, and any ,n|}>>, the computations of \N|)>>,\,x|)>> and of *\N|)>,\,x|)>> take time **k*log p|)>> since their supports have cardinality |)>> by. Finally, from <\equation*> f\N=N*x|)>*\N|)> we obtain the representation of N> as <\equation*> N|)>,\,x|)>=+N*x|)>*\N|)>,\,x|)>, using additional time *\*k*log p|)>>. The cost of the data reorganizations in the proof of Proposition is negligible. <\bibliography|bib|plain|polexp.bib> <\bib-list|10> M. Agrawal, N. Kayal, and N. Saxena. PRIMES is in P. , pages 781–793, 2004. B. Bank, M. Giusti, J. Heintz, G. Lecerf, G. Matera, and P. Solernó. Degeneracy loci and polynomial equation solving. , 15(1):159–184, 2015. M. Bardet. . PhD thesis, Université Pierre et Marie Curie - Paris VI, 2004. . M. Bardet, J.-C. Faugère, and B. Salvy. On the complexity of the > Gröbner basis algorithm. Symbolic Comput.>, 70:49–70, 2015. S. J. Berkowitz. On computing the determinant in small parallel time using a small number of processors. , 18:147–150, 1984. J. Berthomieu, J. van der Hoeven, and G. Lecerf. Relaxed algorithms for p-adic numbers. Théor. Nombres Bordeaux>, 23(3), 2011. J. Berthomieu, G. Lecerf, and G. Quintin. Polynomial root finding over local rings and application to error correcting codes. , 24(6):413–443, 2013. A. Bostan, F. Chyzak, M. Giusti, R. Lebreton, G. Lecerf, B. Salvy, and É. Schost. . Frédéric Chyzak (self-published), Palaiseau, 2017. Electronic version available from . A. Bostan, Ph. Flajolet, B. Salvy, and É. Schost. Fast computation of special resultants. Symbolic Comput.>, 41(1):1–29, 2006. A. Bostan and É. Schost. Polynomial evaluation and interpolation on special sets of points. Complexity>, 21(4):420–446, 2005. R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. ACM>, 25(4):581–595, 1978. W. D. Brownawell. Bounds for the degrees in the Nullstellensatz. , 126(3):577–591, 1987.
P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic Complexity Theory, volume 315 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, 1997.
J. F. Canny, E. Kaltofen, and L. Yagati. Solving systems of nonlinear polynomial equations faster. In ISSAC '89, pages 121–128, New York, NY, USA, 1989. ACM.
D. G. Cantor and E. Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Inform., 28:693–701, 1991.
J.-M. Couveignes and R. Lercier. Fast construction of irreducible polynomials over finite fields. Israel J. Math., 194(1):77–105, 2013.
C. D'Andrea, A. Ostafe, I. E. Shparlinski, and M. Sombra. Reduction modulo primes of systems of polynomial equations and algebraic dynamical systems. Trans. Amer. Math. Soc., 371(2):1169–1198, 2019.
C. Durvye and G. Lecerf. A concise proof of the Kronecker polynomial system solver from scratch. Expo. Math., 26(2):101–139, 2008.
J.-C. Faugère, P. Gaudry, L. Huot, and G. Renault. Sub-cubic change of ordering for Gröbner basis: a probabilistic approach. In ISSAC '14, pages 170–177, New York, NY, USA, 2014. ACM.
J.-C. Faugère, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. J. Symbolic Comput., 16(4):329–344, 1993.
J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, New York, 3rd edition, 2013.
N. Giménez and G. Matera. On the bit complexity of polynomial system solving. J. Complexity, 51:20–67, 2019.
M. Giusti. Some effectivity problems in polynomial ideal theory. In J. Fitch, editor, EUROSAM 84, pages 159–171, Berlin, Heidelberg, 1984. Springer Berlin Heidelberg.
M. Giusti, K. Hägele, J. Heintz, J. L. Montaña, J. E. Morais, and L. M. Pardo. Lower bounds for Diophantine approximations. J. Pure Appl. Algebra, 117/118:277–317, 1997.
M. Giusti, J. Heintz, J. E. Morais, J. Morgenstern, and L. M. Pardo. Straight-line programs in geometric elimination theory. J. Pure Appl. Algebra, 124(1-3):101–146, 1998.
M. Giusti, J. Heintz, J. E. Morais, and L. M. Pardo. When polynomial equation systems can be "solved" fast? In AAECC-11, volume 948 of Lect. Notes Comput. Sci., pages 205–231. Springer-Verlag, 1995.
M. Giusti, G. Lecerf, and B. Salvy. A Gröbner free alternative for polynomial system solving. J. Complexity, 17(1):154–211, 2001.
B. Grenet, J. van der Hoeven, and G. Lecerf. Deterministic root finding over finite fields using Graeffe transforms. Appl. Algebra Engrg. Comm. Comput., 27(3):237–257, 2016.
D. Harvey and J. van der Hoeven. Faster polynomial multiplication over finite fields using cyclotomic coefficient rings. J. Complexity, 54:101404, 2019.
D. Harvey and J. van der Hoeven. Integer multiplication in time O(n log n). Technical report, HAL, 2019.
D. Harvey and J. van der Hoeven. Polynomial multiplication over finite fields in time O(n log n). Technical report, HAL, 2019.
J. Heintz. Definability and fast quantifier elimination in algebraically closed fields. Theoret. Comput. Sci., 24(3):239–277, 1983.
J. van der Hoeven and G. Lecerf. Modular composition via complex roots. Technical report, CNRS & École polytechnique, 2017.
J. van der Hoeven and G. Lecerf. Modular composition via factorization. J. Complexity, 48:36–68, 2018.
J. van der Hoeven and G. Lecerf. Accelerated tower arithmetic. J. Complexity, 55:101402, 2019.
J. van der Hoeven and G. Lecerf. Fast multivariate multi-point evaluation revisited. J. Complexity, 56:101405, 2020.
J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix, from 2002.
X. Huang and V. Y. Pan. Fast rectangular matrix multiplication and applications. J. Complexity, 14(2):257–299, 1998.
G. Jeronimo and J. Sabia. Effective equidimensional decomposition of affine varieties. J. Pure Appl. Algebra, 169(2–3):229–248, 2002.
E. Kaltofen and V. Shoup. Fast polynomial factorization over high algebraic extensions of finite fields. In ISSAC '97, pages 184–188, New York, NY, USA, 1997. ACM.
K. S. Kedlaya and C. Umans. Fast modular composition in any characteristic. In FOCS '08, pages 146–155, Washington, DC, USA, 2008. IEEE Computer Society.
K. S. Kedlaya and C. Umans. Fast polynomial factorization and modular composition. SIAM J. Comput., 40(6):1767–1802, 2011.
T. Krick, L. M. Pardo, and M. Sombra. Sharp estimates for the arithmetic Nullstellensatz.
Duke Math. J., 109(3):521–598, 2001.
L. Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. J. reine angew. Math., 92:1–122, 1882.
Y. N. Lakshman. On the complexity of computing a Gröbner basis for the radical of a zero dimensional ideal. In STOC '90, pages 555–563, New York, NY, USA, 1990. ACM.
Y. N. Lakshman. A single exponential bound on the complexity of computing Gröbner bases of zero dimensional ideals. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 227–234, Boston, MA, 1991. Birkhäuser Boston.
Y. N. Lakshman and D. Lazard. On the complexity of zero-dimensional algebraic systems. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 217–225, Boston, MA, 1991. Birkhäuser Boston.
D. Lazard. Gröbner bases, Gaussian elimination and resolution of systems of algebraic equations. In J. A. van Hulzen, editor, EUROCAL '83, pages 146–156. Springer Berlin Heidelberg, 1983.
F. Le Gall. Powers of tensors and fast matrix multiplication. In K. Nabeshima, editor, ISSAC '14, pages 296–303, New York, NY, USA, 2014. ACM.
G. Lecerf. Computing the equidimensional decomposition of an algebraic closed set by means of lifting fibers. J. Complexity, 19(4):564–596, 2003.
G. Lecerf. On the complexity of the Lickteig–Roy subresultant algorithm. J. Symbolic Comput., 92:243–268, 2019.
P. Lelong. Mesure de Mahler et calcul de constantes universelles pour les polynômes de n variables. Math. Ann., 299(1):673–695, 1994.
H. Matsumura. Commutative Ring Theory, volume 8 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1989.
D. McKinnon. An arithmetic analogue of Bezout's theorem. Compositio Math., 126(2):147–155, 2001.
J. M. McNamee and V. Y. Pan. Numerical Methods for Roots of Polynomials, Part II, volume 16 of Studies in Computational Mathematics. Elsevier, 2013.
B. Mourrain, V. Y. Pan, and O. Ruatta. Accelerated solution of multivariate polynomial systems of equations. SIAM J. Comput., 32(2):435–454, 2003.
B. Mourrain and Ph. Trébuchet. Solving projective complete intersection faster. In ISSAC '00, pages 234–241, New York, NY, USA, 2000. ACM.
B. Mourrain and Ph. Trébuchet. Generalized normal forms and polynomial system solving. In ISSAC '05, pages 253–260, New York, NY, USA, 2005. ACM.
B. Mourrain and Ph. Trébuchet. Border basis representation of a general quotient algebra. In ISSAC '12, pages 265–272, New York, NY, USA, 2012. ACM.
A. K. Narayanan. Fast computation of isomorphisms between finite fields using elliptic curves. In L. Budaghyan and F. Rodríguez-Henríquez, editors, Arithmetic of Finite Fields (WAIFI 2018), volume 11321 of Lect. Notes Comput. Sci., pages 74–91. Springer, Cham, 2018.
C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
P. Philippon. Sur des hauteurs alternatives. I. Math. Ann., 289(1):255–283, 1991.
A. Poteaux and É. Schost. On the complexity of computing with zero-dimensional triangular sets. J. Symbolic Comput., 50:110–138, 2013.
A. Schönhage. Schnelle Berechnung von Kettenbruchentwicklungen. Acta Inform., 1(2):139–144, 1971.
A. Schönhage, A. F. W. Grotefeld, and E. Vetter. Fast Algorithms: A Multitape Turing Machine Implementation. B.I. Wissenschaftsverlag, Mannheim, 1994.
J. T. Schwartz. Fast probabilistic algorithms for verification of polynomial identities. J. ACM, 27(4):701–717, 1980.
V. Shoup. New algorithms for finding irreducible polynomials over finite fields. Math. Comp., 54(189):435–447, 1990.
P. S. Wang. A p-adic algorithm for univariate partial fractions. In SYMSAC '81, pages 212–217, New York, NY, USA, 1981. ACM.
R. Zippel. Probabilistic algorithms for sparse polynomials. In EUROSAM '79, number 72 in Lect. Notes Comput. Sci., pages 216–226. Springer-Verlag, 1979.
Table of contents

1. Introduction
   Notations
   1.1. Related work
   1.2. Our contributions
2. Complexity model and basic operations
   2.1. Basic data types (integers, arrays, univariate polynomials, multivariate polynomials)
   2.2. Multivariate multi-point evaluation
   2.3. Linear changes of variables
3. Fast multivariate modular composition
   3.1. Reduction to evaluation
   3.2. Main complexity bound
   3.3. Extension of the residue field
4. Data structures and randomness issues
   4.1. Noether position
   4.2. Primitive element
   4.3. Lifting fiber
   4.4. Random parameter summary
   4.5. A posteriori verification
5. The Kronecker solver
   5.1. Lifting step
   5.2. Intersection step
   5.3. Total complexity
6. Systems over the rational numbers
   6.1. p-adic Hensel lifting
   6.2. Chow form and height
   6.3. Bit size of Kronecker parametrizations
   6.4. Lucky primes
   6.5. Top level algorithm over the rational numbers
7. Conclusion
Appendix A. Linear changes of variables
   A.1. Algebraic complexity model
   A.2. Coefficients in a Galois ring
Bibliography