On the complexity exponent of
polynomial system solving

Joris van der Hoevena, Grégoire Lecerfb

CNRS (UMR 7161, LIX)

Laboratoire d'informatique de l'École polytechnique

Campus de l'École polytechnique

1, rue Honoré d'Estienne d'Orves

Bâtiment Alan Turing, CS35003

91120 Palaiseau, France

a. Email: vdhoeven@lix.polytechnique.fr

b. Email: lecerf@lix.polytechnique.fr

Final version of March 2020

We present a probabilistic Las Vegas algorithm for solving sufficiently generic square polynomial systems over finite fields. We achieve a nearly quadratic running time in the number of solutions, for densely represented input polynomials. We also prove a nearly linear bit complexity bound for polynomial systems with rational coefficients. Our results are obtained using the combination of the Kronecker solver and a new improved algorithm for fast multivariate modular composition.

Keywords: Polynomial system solving, Geometric resolution, Complexity bounds

A.M.S. subject classification: 14-04, 14Q20, 14B05, 68W30

1.Introduction

Let $\mathbb{K}$ be an effective field. This means that we have a data structure for representing the elements of $\mathbb{K}$, together with algorithms for performing the field operations. Consider a homogeneous system of polynomial equations

$$f_1(x_0, \dots, x_n) = \cdots = f_n(x_0, \dots, x_n) = 0,$$

where $f_1, \dots, f_n \in \mathbb{K}[x_0, \dots, x_n]$. We are interested in the exact resolution of such a system through the computation of a parametrization of its zero-set by a so-called primitive element; see section 4.2 for a precise definition.

Throughout this paper, we assume that $f_1, \dots, f_n$ are given in standard dense representation. The total degree of each $f_i$ is written $d_i := \deg f_i$, and we let $d := \max(d_1, \dots, d_n)$. Our algorithm requires the following regularity conditions:

R1

$f_1, \dots, f_n$ is a regular sequence in $\mathbb{K}[x_0, \dots, x_n]$;

R2

The intermediate ideals $I_i := (f_1, \dots, f_i)$ are absolutely radical, which means radical over the algebraic closure $\bar{\mathbb{K}}$ of $\mathbb{K}$, for $i = 1, \dots, n$;

R3

$d_i \geq 2$, for $i = 1, \dots, n$.

Conditions R1 and R2 formalize the idea of a “generic system”, in the sense that they are likely to hold when the $f_i$ admit random coefficients. Under these conditions it is well known that the system admits exactly $D := d_1 \cdots d_n$ geometrically regular solutions in the projective space $\mathbb{P}^n(\bar{\mathbb{K}})$ over the algebraic closure $\bar{\mathbb{K}}$ of $\mathbb{K}$. Condition R3 is included for the sake of simplicity. It is not restrictive, because linear equations can be removed by means of Gaussian elimination combined with a suitable linear change of the variables; see Remark 5.6.

We prove new probabilistic complexity bounds for computing all the solutions in terms of a univariate parametrization by a primitive element. For finite fields $\mathbb{K} = \mathbb{F}_q$, our bound is arbitrarily close to quadratic in the number $D$ of solutions, under a condition made precise in Corollary 5.5. For rational coefficients of bounded height, we deduce a complexity bound that is nearly linear in the expected bit size of the output of the algorithm: see Theorem 6.11. These results improve upon the best previously known complexity bounds.

Notations

We set . In order to simplify the presentation of complexity bounds, we use soft-Oh notation: $f(n) \in \tilde{O}(g(n))$ means that $f(n) = g(n) \log^{O(1)}(g(n) + 3)$; see [21, chapter 25, section 7] for technical details. The least integer larger or equal to $x$ is written $\lceil x \rceil$; the largest one smaller or equal to $x$ is written $\lfloor x \rfloor$.

The $\mathbb{K}$-vector space of polynomials of degree $< n$ is written $\mathbb{K}[x]_{<n}$. The remainder of the division of $f$ by $g$ is written $f \operatorname{rem} g$. In order to express that an expression $a$ is explicitly computed modulo $g$, we write $a \bmod g$.

The Galois ring of characteristic $p^{\kappa}$ and cardinality $p^{\kappa \lambda}$, written $\operatorname{GR}(p^{\kappa}, \lambda)$, is defined as

$$\operatorname{GR}(p^{\kappa}, \lambda) := (\mathbb{Z}/p^{\kappa}\mathbb{Z})[z]/(\mu(z)),$$

with $\mu \in (\mathbb{Z}/p^{\kappa}\mathbb{Z})[z]$ monic of degree $\lambda$ and irreducible modulo $p$.
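To make this definition concrete, here is a minimal Python sketch of multiplication in a small Galois ring; the parameters $p = 2$, $\kappa = 3$, $\lambda = 2$ and the modulus $\mu(z) = z^2 + z + 1$ (irreducible modulo 2) are illustrative choices of ours, not taken from the text.

# A sketch, not the paper's code: arithmetic in the Galois ring
# GR(p^kappa, lambda) = (Z/p^kappa Z)[z]/(mu(z)), with mu monic of
# degree lambda and irreducible modulo p.  Elements are coefficient
# lists [c_0, ..., c_{lambda-1}] with entries taken modulo p^kappa.

p, kappa, lam = 2, 3, 2        # GR(8, 2): characteristic 8, cardinality 8^2
q = p ** kappa
mu = [1, 1, 1]                 # mu(z) = z^2 + z + 1, irreducible modulo 2

def gr_mul(a, b):
    """Product of two elements of GR(p^kappa, lambda)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % q
    # Reduction modulo mu: since mu is monic, Euclidean division is
    # well defined even though Z/p^kappa Z is not a field.
    for i in range(len(prod) - 1, lam - 1, -1):
        c = prod[i]
        if c:
            for j in range(lam + 1):   # subtract c * z^(i-lam) * mu(z)
                prod[i - lam + j] = (prod[i - lam + j] - c * mu[j]) % q
    return prod[:lam]

print(gr_mul([0, 1], [1, 1]))  # z * (z + 1) = z^2 + z = -1, i.e. [7, 0]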

1.1.Related work

The complexity of polynomial system solving is a central question in effective algebraic geometry, which is still open. Except in some very special cases, no softly linear time algorithms are known. In order to simplify the discussion, we assume that for .

One known favorable special case concerns systems of bounded degrees over finite fields that satisfy R1 and R2: in suitable bit complexity models (computation trees, or random access memory machines), the variant of the Kronecker solver designed by Giusti, Lecerf and Salvy [27] admits a softly linear running time for suitable parameter ranges. This solver is probabilistic, of Las Vegas type, with a well understood probability of failure that can easily be made arbitrarily small in practice; for details, see the recent work by Giménez and Matera [22]. The Kronecker algorithm has a long history, with roots deep in Kronecker's work [44], but it emerged from a series of seminal articles by Giusti, Heintz, Matera, Morais, Pardo, and their collaborators in the nineties [24-26]; for the history of this algorithm along with references, we refer to the introduction of [18].

Assume that the input polynomials are given as a straight-line program of size over without division (see the definition in [13] for instance) and that we use the computation tree model for our complexity measures. If the cardinality of is sufficiently large and the conditions R1 and R2 hold, then the Kronecker solver computes a parametrization of the solution set in terms of a primitive element using

(1.1)

operations in $\mathbb{K}$ and with a uniformly bounded probability of failure. The complexity bound (1.1) essentially comes from [27, Theorem 1]; this is detailed in section 5 below. If the degrees $d_i$ are all bounded, then the bound (1.1) is essentially quadratic in $D$. In the other extreme case when $n$ is bounded, the bound (1.1) becomes essentially cubic in $D$. In this paper, we show how to obtain a nearly quadratic bound for this case as well.

Gröbner bases constitute another cornerstone for polynomial system solving. Algorithms in this setting are usually deterministic, but their complexities are highly intricate in general: they depend on both the system and the admissible ordering attached to monomials. The complexities of unfavourable cases are doubly exponential in the number $n$ of variables; see a summary of known families in [21, chapter 21]. The first simply exponential bounds of the form $D^{O(1)}$ for zero dimensional systems are due to Giusti, Lazard and Lakshman in the eighties [23, 45-48]. The constant hidden behind the latter “$O$” may be related to a feasible exponent $\omega$ for linear algebra, in the sense that any two square matrices of size $n$ may be multiplied with $O(n^{\omega})$ operations in their coefficient ring. Basically one may perform Gaussian elimination on Macaulay matrices: for instance, under R1 and for the graded reverse lexicographic ordering, the resolution requires $O\big(\binom{n + \sigma}{n}^{\omega}\big)$ operations in $\mathbb{K}$, where $\sigma := d_1 + \cdots + d_n - n + 1$ is the Macaulay bound; see [4], or [8, chapter 26] for a proof from scratch, for instance.

At the present time it is not known how to exploit the structure of Macaulay matrices to perform Gaussian elimination in time quadratic in or even in . One contribution to this problem, due to Canny, Kaltofen, and Yagati [14], relies on sparse polynomial interpolation and the Wiedemann solver. Further important results based on structured linear algebra have been established by Mourrain, Pan, and Ruatta [56]: they proved that the number of roots and of real roots can be computed with floating point operations, and that all the (real) roots in a given box up to the error can be computed with floating point operations, where and depend linearly on as well as on the conditioning of the system. Yet another important advance here is Faugère's F5 algorithm, which suppresses useless rows in Macaulay matrices. However, its worst case complexity remains essentially cubic in $D$ [3, 4]. Other methods derived or inspired by Macaulay's work have been developed by Mourrain and Trébuchet [57-59].

For Gröbner bases of zero dimensional systems, monomial orderings may be changed quite conveniently by means of linear algebra. This is known as the FGLM algorithm (named after the initials of its authors, Faugère, Gianni, Lazard and Mora [20]) and involves a cost $O(n D^3)$. Recently, Faugère, Gaudry, Huot and Renault [19] decreased it to $\tilde{O}(D^{\omega})$. Nevertheless one must keep in mind that the best practical values for $\omega$ are only about $2.81$, which is far from the best known theoretical bound $2.3728639$, due to Le Gall [49].

Another aspect of the present paper concerns multivariate modular composition, that is, the computation of $f(g_1, \dots, g_n) \operatorname{rem} h$, where $f \in \mathbb{K}[x_1, \dots, x_n]$, $h \in \mathbb{K}[x]$ has degree $\ell$, and $g_1, \dots, g_n$ are in $\mathbb{K}[x]$ of degrees $< \ell$. When $n = 1$, the best known complexity bound $O(\ell^{(\omega + 1)/2})$ for any field $\mathbb{K}$ is due to Brent and Kung [11]. Their algorithm even yields a sub-quadratic cost $O(\ell^{\omega_2 / 2})$ when using fast linear algebra; see [40, p. 185]. Here the constant $\omega_2$ is such that an $n \times n$ matrix over $\mathbb{K}$ may be multiplied with another rectangular $n \times n^2$ matrix with $O(n^{\omega_2})$ operations in $\mathbb{K}$. Huang and Pan proved that $\omega_2 < 3.333954$; see [38, Theorem 10.1]. In [35, section 3] we extended Brent and Kung's result to the multivariate setting. A major breakthrough for the modular composition problem is due to Kedlaya and Umans [41, 42], in the case when $\mathbb{K}$ is a finite field and, more generally, a finite ring of the form $(\mathbb{Z}/r\mathbb{Z})[z]/(\theta(z))$ for any integer $r \geq 2$ and $\theta$ monic of degree $k$. For any fixed rational value $\epsilon > 0$, they showed that $f(g_1, \dots, g_n)$ can be computed modulo $h$ in time $\big(((d+1)^n + \ell)\, k \log r\big)^{1 + \epsilon}$ whenever the partial degrees of $f$ are $< d$, and under a technical growth condition relating $d$, $n$, and $\ell$; see [42, Theorem 7.1]. Extensions to compositions modulo triangular sets can be found in [63].

1.2.Our contributions

The first technical but important contribution of this paper concerns the refinement of Kedlaya and Umans' complexity result on modular composition [41, 42]. In fact, in Corollary 3.6 we achieve a sharper complexity bound in terms of the total degree of , that is softly linear in the bit size of the coefficients, and that holds without the aforementioned technical condition. We also handle fast evaluations of in a larger algebra of the form , where and is monic in of degree : this is a particular case of composition modulo triangular sets studied in [63], for which we also improve upon the dependency on the coefficient size. These new results are based on our recent advances on multivariate multi-point evaluation algorithms [36]. At a more abstract level, our algorithm applies to any field over which fast multivariate multi-point polynomial evaluation exists. We recall that fast multivariate multi-point evaluation remains an important open problem: no efficient algorithms are known over a general abstract field, and we are not aware of any efficient implementations of Kedlaya and Umans' method; for some recent progress on special cases, we refer to [33, 34].

Our main contributions concern the complexity of polynomial system solving. We first prove a Las Vegas type probabilistic bit complexity bound over the finite field $\mathbb{F}_q$, for the dense representation of input polynomials, and where $\epsilon > 0$ is any fixed rational value: see Corollary 5.5. Under the condition stated there, this bound is arbitrarily close to quadratic in the number $D$ of roots of the system. This improves upon previously known bounds. For example, the complexity bound (1.1) simplifies to whenever . With input polynomials given in dense representation, the quantity is of the order . If all the $d_i$ remain bounded and if $n$ tends to infinity, then we have

Consequently, the expected complexity of the Kronecker solver becomes , in terms of the number of arithmetic operations in $\mathbb{F}_q$. However, if the $d_i$ are no longer bounded, then may grow with , and the worst case complexity of the solver becomes .

Our next main result concerns the case $\mathbb{K} = \mathbb{Q}$. We assume that $f_1, \dots, f_n$ have integer coefficients of given bit size, and we bound the expected running time of our solver. Under a condition made precise in Theorem 6.11, this time turns out to be nearly linear in the bit size of the output. This complexity bound admits major consequences for symbolic real solving, especially for algorithms based on the Kronecker solver; see for instance [2] and references therein. Once a Kronecker parametrization is known, it is also possible to appeal to univariate numerical solvers to deduce complex or real approximations of the roots of the system; see for instance the book [55].

Let us briefly describe the structure of this paper. Section 2 is devoted to prerequisites about the complexity model and fast multi-point polynomial evaluation. Section 3 deals with various particular cases of multivariate modular composition involved in the Kronecker solver. Section 4 concerns the algebraic data structures needed by the solver, along with general position and some of the randomness aspects. The solver itself is recalled in section 5, which ends with our main result over finite fields. Our main complexity bound over the rational numbers is stated in the final section 6.

It may be helpful to notice that the probability analysis of our algorithms contains two main stages. On the one hand, we need to put the system in general position and prepare the main data structures. This part is handled in section 4. On the other hand, for the sake of efficiency, some subalgorithms of the Kronecker solver involve additional random parameters that must be taken outside certain proper closed subvarieties (see Propositions 5.2 and 5.3), or integer parameters that must be coprime with a certain set of bad primes (see Lemma 6.7). The probability of picking suitable random parameters of this kind will be analyzed separately.

2.Complexity model and basic operations

Throughout this paper, we will be working in the Turing machine model [61] with sufficiently many tapes (seven tapes are usually sufficient in order to implement basic routines on polynomials, series, and matrices in a natural way). The Kronecker algorithm is randomized, so it also requires a special instruction to generate a random symbol in one cell within constant time.

In some cases, algebraic structures have a natural bit size (e.g. modular integers, finite fields); in other cases the size is variable (e.g. arrays, polynomials). In both cases elements are represented on tapes as sequences of symbols, followed by a specific termination symbol “#”. The reader must keep in mind that the heads of the machine can only move one cell left or right at each time step. Algorithms must take care to use data in a way that is as contiguous as possible, and loop counters cannot be used for free.

The rest of the section gathers standard data types and elementary operations needed in the next sections. We freely use well known complexity bounds on polynomials and matrices from [21]; details about Turing machine implementations of these algorithms can be found in [65].

2.1.Basic data types

Integers

We use binary representation for integers. A modular integer in is represented by its natural preimage in . Integers can be added in linear time and multiplied in softly linear time ; see [30] and historical references therein. Integer divisions and extended gcds in bit size also take softly linear time [64]. We will also use truncated -adic integers, for which we refer the interested reader to [6] for practical algorithms.

Arrays

One dimensional arrays are sequences of elements terminated by “#”. For example, a vector $(a_1, \dots, a_r)$ is stored as the concatenation $a_1 \# a_2 \# \cdots a_r \# \#$, where each element is followed by its own termination symbol and the final “#” closes the array.

For bidimensional arrays we use column-major representation. This means that an array $(A_{i,j})$ of size $r \times c$ ($r$ rows and $c$ columns) is stored as the vector of its columns, i.e. $(A_{1,1}, \dots, A_{r,1}, A_{1,2}, \dots, A_{r,2}, \dots, A_{1,c}, \dots, A_{r,c})$. Such arrays are allowed to contain elements of different types and sizes. In the Turing machine model, we recall that the transposition of a bidimensional array can be achieved in softly linear time:

Lemma 2.1. [36, Lemma 2.3] Let $M = (M_{i,j})$ be an $r \times c$ matrix. Let $s_{i,j}$ denote the size of $M_{i,j}$ for all $i, j$, and define $S := \sum_{i,j} s_{i,j}$. Then $M$ can be transposed in time $O(S \log \min(r, c))$.
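The following Python fragment illustrates the column-major convention and the effect of transposition on it. The string entries and the “#” separators merely mimic variable-size data on a tape; the quadratic rearrangement below makes no attempt to reflect the softly linear Turing machine algorithm of Lemma 2.1.

# A 2 x 3 array with entries of different sizes.
rows, cols = 2, 3
a = [["a", "bb", "c"],
     ["dd", "e", "ff"]]

# Column-major flattening: the array is stored as the sequence of its
# columns, each entry followed by "#", plus a final terminator.
flat = [a[i][j] for j in range(cols) for i in range(rows)]
print("#".join(flat) + "##")       # a#dd#bb#e#c#ff##

# Transposition turns the column-major storage of a into the
# column-major storage of its transpose.
t = [[a[i][j] for i in range(rows)] for j in range(cols)]
flat_t = [t[j][i] for i in range(rows) for j in range(cols)]
print("#".join(flat_t) + "##")     # a#bb#c#dd#e#ff##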

Univariate polynomials

For univariate polynomials we use dense representation, which means that a polynomial of degree $n$ is stored as the vector of its $n + 1$ coefficients from degrees $0$ to $n$. Additions and subtractions take linear time in $n$. Multiplying two polynomials over finite fields may be achieved in softly linear time [15, 29, 31].

In the present paper we consider finite rings of the form with and monic of degree . Elements of are stored as their natural preimage in with coefficients in . Additions and subtractions in take linear time, products take time ; see for instance [21, part II].

Lemma 2.2. [36, adapted from Lemma 2.12] Let be as above and let be polynomials in . For any sequence of points in we may compute in time .

Lemma 2.3. [36, adapted from Lemma 2.14] Let be a Galois ring, let be elements in such that is invertible modulo for all , and let be vectors in . We may compute the unique polynomials in such that for and in time .

Multivariate polynomials

For a polynomial $f \in \mathbb{K}[x_1, \dots, x_n]$ we use the recursive dense representation, by regarding $f$ as an element of $\mathbb{K}[x_1][x_2] \cdots [x_n]$. In particular, $f$ admits the same representation as its expansion as a univariate polynomial in $x_n$ with coefficients in $\mathbb{K}[x_1, \dots, x_{n-1}]$. In our algorithms, the number $n$ of variables is not part of the representation of $f$, so it must be supplied as a separate parameter.

The support of $f$ is defined as the set of monomials with non-zero coefficients and we write for its cardinality. Assuming that, apart from the mandatory trailing “#” symbol, the representations of coefficients in $\mathbb{K}$ do not involve other “#” symbols (this can always be achieved through suitable renamings), we denote the number of “#” symbols involved in the representation of $f$ by . We notice that . For the remainder of this subsection, we assume that the size of elements in $\mathbb{K}$ is bounded by a constant .

Example 2.4. The univariate polynomial of degree is represented by . The bivariate polynomial is represented by .
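To fix ideas, the following Python sketch mirrors the recursive dense representation on a small example of our own; nested lists play the role of the tape encoding.

# A sketch of the recursive dense representation: a polynomial of
# K[x1, ..., xn] is stored as a univariate polynomial in xn whose
# coefficients are recursively represented polynomials of
# K[x1, ..., x_{n-1}]; at depth n, plain field elements remain.

# The bivariate polynomial f = 1 + 2*x1 + 3*x1*x2, viewed in
# (Q[x1])[x2] as (1 + 2*x1) + (3*x1)*x2, is stored as:
f = [[1, 2],    # coefficient of x2^0:  1 + 2*x1
     [0, 3]]    # coefficient of x2^1:  3*x1

# The number of variables is not part of the representation: the same
# nested list could encode a univariate polynomial over another ring,
# so n must be supplied separately to all algorithms.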

Lemma 2.5. [36, Lemma 2.5] Let be of partial degree in for . Then , where .

Lemma 2.6. [36, Lemma 2.6] Let be a non-zero polynomial of total degree . Then , where .

Lemma 2.7. [36, Lemma 2.7] The partial degree bounds of a non-zero polynomial can be computed in time , where .

Lemma 2.8. [36, Lemma 2.8] The total degree of a polynomial can be computed in time , with the convention that .

Lemma 2.9. Let be as above. Any partial derivative of of total degree can be computed in time , where .

Proof. In order to compute the partial derivative in , we run over all the coefficients of in time by Lemma 2.6.

If we are interested in the derivative in with , then we reinterpret as an element of and its derivative requires time .

Lemma 2.10. Let be as above, let be of total degree , and let . Then may be computed in time , where .

Proof. We use Horner's method to evaluate univariate polynomials over at in softly linear time. In total, the machine runs over the entire representation of and the time spent in arithmetic operations is at most .
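As an illustration of this proof, here is a short Python transcription of the recursive Horner scheme on the representation of the previous sketch; it conveys the idea only and makes no claim about Turing machine costs.

def evaluate(rep, point, n):
    """Evaluate a recursive dense representation at a point of K^n by
    Horner's rule in the outermost variable, recursing on coefficients."""
    if n == 0:
        return rep                        # an element of K
    acc = 0
    for coeff in reversed(rep):           # Horner's rule in x_n
        acc = acc * point[n - 1] + evaluate(coeff, point, n - 1)
    return acc

# f = 1 + 2*x1 + 3*x1*x2, stored as [[1, 2], [0, 3]]:
print(evaluate([[1, 2], [0, 3]], [2, 5], 2))   # 1 + 4 + 30 = 35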

Lemma 2.11. [36, adapted from Lemma 2.13] Let be as above, let , and let be elements in . Then can be computed in time

where is the cardinality of the support of in the variables .

At several places we will use the following bound on the cardinality of the support of a polynomial in $n$ variables and total degree $d$:

(2.1)

For the sake of efficiency, a homogeneous polynomial in the variables has a dedicated representation made of and . In this way, can be recovered as .

2.2.Multivariate multi-point evaluation

The problem called multivariate multi-point evaluation over an effective ring $\mathbb{A}$ is the following: given $f \in \mathbb{A}[x_1, \dots, x_n]$ and points $\alpha_1, \dots, \alpha_N$ in $\mathbb{A}^n$, compute $f(\alpha_1), \dots, f(\alpha_N)$. The first statement below concerns the cost of the naive evaluation algorithm in terms of the total degree of $f$.

Lemma 2.12. [36, adapted from Lemma 3.8] Let with monic of degree be as above, let be of total degree . Then we may evaluate at a point in time .

In terms of the partial degrees of , we will need the following better complexity bounds that refine previous work by Kedlaya and Umans [41, 42]. The first theorem takes partial degrees into account, while the second one is in terms of the total degree.

Theorem 2.13. [36, Corollary 4.5] Let be a fixed rational value. Let with monic of degree , let be of partial degree in for , and let be a sequence of points in . Then we may compute in time

Theorem 2.14. [36, Corollary 4.6] Let be a fixed rational value. Let with monic of degree , let be of total degree , and let be a sequence of points in . Then we may compute in time

2.3.Linear changes of variables

The Kronecker solver needs to apply linear changes of variables, given by invertible matrices, to the input homogeneous polynomials. Let $f$ be such a polynomial in $\mathbb{K}[x_0, \dots, x_n]$ of total degree $d$, and let $M$ be an $(n+1) \times (n+1)$ matrix over $\mathbb{K}$. Writing $x$ for the column vector with entries $x_0, \dots, x_n$, we need to compute $f(M x)$.

A natural idea is to interpolate from its values at , where is a subset of of cardinality . Such an interpolation is known to be computable in time ; see Lemma A.8 of the appendix. On the other hand the values of at may be obtained in time by [36, Proposition 5.7], for any fixed rational value . However, this approach suffers from two drawbacks: it takes neither the homogeneity of nor the structure of into account. At the end of Appendix A, we show the following sharper complexity result:

Proposition 2.15. Let $f \in \mathbb{K}[x_0, \dots, x_n]$ be homogeneous of degree $d$, and let $M$ be an $(n+1) \times (n+1)$ matrix over $\mathbb{K}$. If the cardinality of $\mathbb{K}$ is sufficiently large and if an LU-decomposition of $M$ is given, then we may compute $f(M x)$ in softly linear time in the size of the dense representation of $f$.
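For comparison, the following sketch (using sympy, with a matrix of our own choosing) performs such a change of variables naively, by simultaneous substitution of the linear forms; Proposition 2.15 is precisely about beating this naive expansion.

import sympy as sp

x = sp.symbols("x0 x1 x2")
f = x[0]**2 + 3 * x[1] * x[2]                  # homogeneous of degree 2
M = sp.Matrix([[1, 2, 0], [0, 1, 1], [1, 0, 1]])

y = M * sp.Matrix(x)                           # the linear forms (M x)_i
fM = sp.expand(f.subs(dict(zip(x, y)), simultaneous=True))
print(fM)           # the expansion of (x0 + 2*x1)^2 + 3*(x1 + x2)*(x0 + x2)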

3.Fast multivariate modular composition

This section is devoted to multivariate modular composition, that is, the evaluation of a multivariate polynomial in at a point in , where with monic and . As recalled in the introduction, no fast algorithm is known for this task over a general ring or field . For the Kronecker solver presented in this paper, the following particular instances of are needed: and , along with tangent numbers over these rings. In this section, we design fast algorithms for these cases, on the basis of Kedlaya and Umans' ideas and of new variants from [36].

If $\kappa = 1$, then $\mathbb{A}$ is the finite field $\mathbb{F}_{p^{\lambda}}$, whereas $\lambda = 1$ corresponds to the ring of the $p$-adic integers truncated to precision $\kappa$. Until the end of the section we set

(3.1)

and

where is a monic polynomial of degree in .
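Before turning to the fast reduction, the following sympy sketch spells out the naive form of the composition problem of this section; the modulus $q = 101$ and the polynomials are illustrative stand-ins of ours.

import sympy as sp

x, y1, y2 = sp.symbols("x y1 y2")
q = 101                                       # stand-in modulus
h = sp.Poly(x**4 + 2*x + 1, x, modulus=q)     # monic of degree 4

f = y1**2 * y2 + 3*y2 + 5                     # multivariate polynomial
g1 = x**2 + 1                                 # coordinates of the point
g2 = 2*x**3 + x                               # in (Z/q Z)[x]/(h(x))

# Naive composition: substitute, expand, and reduce modulo h; the cost
# is governed by the number of terms of f times products modulo h.
res = sp.Poly(sp.expand(f.subs({y1: g1, y2: g2})), x, modulus=q).rem(h)
print(res)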

3.1.Reduction to evaluation

Let represent a cost function for the multi-point evaluation of -variate polynomials over with partial degrees , total degree , and for evaluation points. The following algorithm reduces modular composition to multi-point evaluation.

Algorithm 3.1

Input
of partial degrees and of total degree ; in .

Output

.

Assumptions

.

  1. Write

    for the canonical preimage of in , where and belong to and have partial degrees in and in . In a similar manner write

    for the preimage of in .

  2. Build points whose reductions modulo are pairwise distinct.

  3. For and for , compute

  4. For compute and .

  5. For and for , compute

  6. Interpolate the polynomials and in , of partial degrees in and in , and such that and for all and .

  7. Return reduced modulo .

Proposition 3.1. Algorithm 3.1 is correct and takes time

whenever .

Proof. First we verify that

Therefore, for all and we obtain

Since has partial degrees in and in , we deduce

whence the correctness of the algorithm.

The rewriting in step 1 takes linear time. The construction of the points in step 2 costs . In step 3 we appeal to Lemma 2.2 in order to obtain

in time , and then to obtain

for in time . Notice that the required data reorganizations can be done in softly linear time thanks to Lemma 2.1.

The cost of step 4 is provided by Lemma 2.11,

and simplifies to by (2.1) and Lemma 2.6.

The derivations in step 5 incur by Lemma 2.9. Taking into account data reorganizations via Lemma 2.1, the total cost of step 5 is

In step 6, the interpolations amount to by Lemma 2.3. Finally step 7 takes additional time .

3.2.Main complexity bound

Our main complexity bound for modular composition is presented in the next theorem, under the condition that sufficiently many evaluation points are at our disposal. For convenience we recall the following lemma.

Lemma 3.2. [36, Lemma 4.7] For all positive integers we have

Theorem 3.3. Let be a fixed rational value, assume that and that

(3.2)

Let be of total degree and let be in , with monic in of degree . Then can be computed in time

Proof. Let be such that

First we examine the case . Lemma 3.2 and the fact that the function

is nondecreasing yield

so we have . By adapting Lemma 2.12 to , the cost of the naive modular evaluation is , since .

From now on we assume that holds. We set . If is bounded then so is and the above adaptation of Lemma 2.12 may again be used to achieve a naive evaluation cost in . Consequently, we may further assume that is sufficiently large to satisfy

(3.3)

We perform the multivariate Kronecker segmentation of into which has variables and partial degrees as follows. We let

and introduce the Kronecker substitution map

We set and . Observe that we have and . From

and

we deduce that

Still by allowing to be sufficiently large we may further assume that

(3.4)

On the one hand, by [36, Proposition 5.1] and (3.3), the computation of takes time

On the other hand, for we compute

by binary powering, in time . We may now compute as

Since (3.4) ensures

the condition (3.2) allows us to combine Proposition 3.1 and Theorem 2.13. This yields the complexity bound

(using (3.4))

for a suitable constant . If , then this bound further simplifies to

Otherwise we have which implies that . In this case, the condition (3.2) allows us to combine Proposition 3.1 and Theorem 2.14, in order to obtain in time

which simplifies to thanks to .

Overall we have proved a complexity bound . Taking into account that , from (3.3), it follows that , which concludes the proof.

3.3.Extension of the residue field

In order to generalize Theorem 3.3 to small residue fields, namely whenever inequality (3.2) does not hold, we extend the cardinality of the ground ring. We will rely on the following construction.

Lemma 3.4. Let be a Galois ring, and let be an integer that is coprime with . Then we may compute a monic irreducible polynomial of degree in time with any fixed arbitrarily low probability of failure, or in expected time with a Las Vegas algorithm. Then we may build the Galois ring as , such that the following isomorphism holds:

for some polynomials and in . The polynomials , and can be computed in time . Each application of and takes time .

Proof. The polynomial can be obtained by using Ben-Or's randomized algorithm [21, Theorem 14.42]. Since and are coprime, a natural candidate for is the composed product of and (usually written ),

(3.5)

It is well known that is irreducible of degree . Let , that can be computed in time . We construct as the monic part of the resultant in and we write for the corresponding subresultant of degree in . For subresultants over Galois rings, we need to compute the necessary minors of the Sylvester matrix by means of Berkowitz' algorithm [5]. In this manner , , and can be obtained in time .

It is well known that is invertible modulo , so we compute and in time .

Let us consider an element represented by , and

where . Equivalently, we have

which corresponds to the usual Chinese remaindering formula associated to the factorization (3.5) of .

We observe that for all roots of . Consequently, for all roots of with , we have . It follows that actually represents . It can be computed in time .

In the other direction, given a preimage of , we obtain as , in time .
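The composed product at the heart of this proof is easy to experiment with. The following sympy sketch computes it through the classical resultant formula, on a toy pair of polynomials over $\mathbb{Q}$ rather than over a Galois ring.

import sympy as sp

x, z = sp.symbols("x z")
mu = x**2 - 2                    # roots +sqrt(2) and -sqrt(2)
nu = x**2 - 3                    # roots +sqrt(3) and -sqrt(3)

# The composed product is the monic polynomial whose roots are the
# pairwise products of the roots of mu and nu; it can be obtained as
# Res_x(mu(x), x^deg(nu) * nu(z/x)).
m = sp.resultant(mu, sp.expand(x**2 * nu.subs(x, z / x)), x)
print(sp.factor(m))              # (z**2 - 6)**2: the roots are +-sqrt(6)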

Remark 3.5. In the latter proof we could have appealed to faster algorithms for building irreducible polynomials, such as the ones of [16, 60, 67]. In addition, faster algorithms are known to obtain in specific cases; see [9, 28, 51]. For simplicity we have preferred to restrict ourselves to well known general algorithms, because the complexity bound of Lemma 3.4 is only to be used when is rather small.

The following corollary aims at completing Theorem 3.3 for small residue fields. It will not be used in the sequel because the extension of the residue field will instead be handled at the top level of the Kronecker solver.

Corollary 3.6. Let be a fixed rational value, let be as in (3.1), and assume that . Let be of total degree and let be in , with monic in of degree . Then can be computed using a probabilistic Las Vegas algorithm in expected time

Proof. Notice that the total degree of may be obtained in time by Lemmas 2.6, 2.8 and inequality (2.1). It remains to handle the case when (3.2) does not hold in Theorem 3.3. In other words, assume that

We first compute the smallest integer such that

that is

and we set

so is coprime to and

We next apply Lemma 3.4, from which we borrow the notation: of degree is obtained in expected time ; the computation of , , and requires time .

With this construction in hand we set

and

and we proceed as follows for the modular composition problem:

Adding up, we obtain the claimed complexity bound—up to replacing by from the outset.

4.Data structures and randomness issues

Consider a polynomial system over an effective field $\mathbb{K}$, which satisfies the regularity assumptions R1, R2, and R3. In the next section, we will recall the Kronecker algorithm that solves such a system. Before doing so, we prepare the ground by presenting a few concepts and data structures that will be necessary, and by discussing how to put the system into a sufficiently general position via a random linear change of coordinates.

Under our regularity assumptions, the coordinate ring of the system has degree $D = d_1 \cdots d_n$ by the Bézout theorem. In particular the system admits $D$ distinct solutions in $\mathbb{P}^n(\bar{\mathbb{K}})$, which are all regular. After applying a generic affine change of coordinates, the Kronecker solver first solves the system $f_1 = 0$, then $f_1 = f_2 = 0$, and so on until $f_1 = \cdots = f_n = 0$. We first study how to perform this change of variables. Next we explain how the positive dimensional solution sets of the intermediate systems are represented.

4.1.Noether position

A homogeneous ideal of is said to be in Noether position when and is an integral ring extension of , where . Let be the column vector with entries . If is an invertible matrix over , then we write

If is generated by a proper regular sequence , then we say that a matrix puts the intermediate ideals in simultaneous Noether position if is in Noether position for all . Let us now examine the probability that this happens for an upper triangular matrix with ones on the diagonal and random entries above the diagonal.

Lemma 4.1. Let satisfy R1, R2, and R3. Given a finite subset of of cardinality , consider the matrix

where the are taken at random in . Then the probability that puts into simultaneous Noether position is at least .

Proof. We use the incremental method described in the proof of [18, Theorem 1.9] as follows (for historical references, we also refer to [18]). The values are taken such that the coefficient of in is non-zero. For such values the ideal is in Noether position.

Then we consider the determinant of the multiplication endomorphism by in : it belongs to and has degree . The values such that the coefficient of in is non-zero put into Noether position.

By induction we consider the determinant of the multiplication endomorphism by in : it belongs to and has degree . The values such that the coefficient of in is non-zero put into Noether position. By taking into account the fact that , the Schwartz–Zippel lemma [66, 69] leads to a probability of success at least

4.2.Primitive element

Let be an absolutely radical ideal of such that the coordinate ring is zero dimensional—here we consider the affine case when is not necessarily homogeneous. A polynomial is said to be a primitive element for if the projections of its powers generate as a -vector space. Let represent the degree of (i.e. its dimension as a -vector space).

Lemma 4.2. Under the above assumptions, given a finite subset of , the probability that a random vector induces a primitive element of is .

Proof. The minimal polynomial of in actually belongs to when the are regarded as parameters, and the total degree in the is ; see for instance [18, Corollary 3.4]. The points to be avoided are the zeros of the discriminant of , seen as a polynomial of total degree in . We conclude by the aforementioned Schwartz–Zippel lemma.

For a primitive element there exist unique polynomials such that , is monic and separable, , and

The polynomials are called the parametrization of by . Equivalently, we may define for , whence

These polynomials are called the Kronecker parametrization of by ; such a parametrization is uniquely determined by .
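On a toy example, such a parametrization can be observed directly. The following sympy sketch is not the Kronecker solver: it simply reads a parametrization off a lexicographic Gröbner basis, for the system $x^2 + y^2 - 5 = x y - 2 = 0$ and the primitive element $u = x$.

import sympy as sp

x, y = sp.symbols("x y")
G = list(sp.groebner([x**2 + y**2 - 5, x*y - 2], y, x, order="lex"))

q = [g for g in G if not g.has(y)][0]    # minimal polynomial of u = x
gy = [g for g in G if g.has(y)][0]       # shape-position generator
w = sp.solve(gy, y)[0]                   # y as a polynomial in u
print(q, "|", w)                         # x**4 - 5*x**2 + 4 | -x**3/2 + 5*x/2

# Sanity check: x*y = 2 holds at every root of q.
for r in sp.roots(q, x):
    assert sp.simplify(r * w.subs(x, r) - 2) == 0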

Lemma 4.3. Let satisfy R1, R2, and R3 and such that the are simultaneously in Noether position for . Given a finite subset of , the probability that a vector induces a primitive element common to all for is .

Proof. Each has dimension as a -vector space [18, section 2.4 and the relationship to the geometric definition of the degree of an algebraic variety in Corollary 3.10]. By the previous lemma and the fact that , the probability that is primitive for all is at least

4.3.Lifting fiber

Let satisfy R1, R2, and R3, be in simultaneous Noether position, and let

for . Thanks to Noether positions, the system has all its roots in the affine space. More generally given a point all the roots of are in the affine space. A point is called a lifting point if is absolutely radical.

Lemma 4.4. Under the latter conditions, given a finite subset of , the probability that a vector is a simultaneous lifting point for is .

Proof. Without loss of generality we may assume that is algebraically closed, so we may consider a primitive element that is common to all , thanks to Lemma 4.3. The minimal polynomial of in is homogeneous in of degree , and is monic and separable in . Any point such that the specialization of at is separable ensures that is absolutely radical; see for instance [18, Proposition 3.6]. The probability of picking such a point at random in is at least by the Schwartz–Zippel lemma. The probability that satisfies the requested property is thus at least

again by using .

4.4.Random parameter summary

Assume that we are given a polynomial system that satisfies R1, R2 and R3. In order to make the Kronecker algorithm work, the following conditions are required for :

V1

is in Noether position,

V2

the ideal of is absolutely radical.

Lemma 4.5. Let satisfy R1, R2 and R3, and let be a finite subset of . If we take in at random, as well as in , and if we let

then V1 and V2 are satisfied with probability after replacing by .

Proof. We first pick a matrix at random as in Lemma 4.1 in order to obtain simultaneous Noether positions. Then comes the choice of the lifting point, for which we use the strategy presented in the proof of Lemma 4.4.

4.5.A posteriori verification

At the end of the resolution, unless the execution has failed, we expect to obtain an invertible matrix over and polynomials such that is separable of degree , the have degree , and

On the other hand, being absolutely radical, its roots are all regular, is separable, and the value of the Jacobian matrix of in at is invertible modulo ; see for instance [18, section 4.2].

One natural question at the end of a probabilistic resolution process is to determine whether the computed solution set is correct or not. In the case of sufficiently generic systems, namely those satisfying R1 and R2, it is sufficient to verify that regular solutions have been found. This idea is formalized in the following proposition, which turns a probabilistic resolution algorithm into a Las Vegas one.

Proposition 4.6. Consider homogeneous polynomials in , an invertible matrix , a linear form , and polynomials in that satisfy the following conditions:

Then, R1 and R2 are satisfied, the ideal is in Noether position, and form a parametrization of by .

Let be a fixed rational value, and assume that R3 holds. If and if

then the above conditions can be checked in time .

Proof. The assumptions ensure that the system admits distinct isolated regular solutions in the affine subspace defined by . By Bézout's theorem, this system therefore has no solution at infinity, namely in the hyperplane defined by $x_0 = 0$.

For , let denote the variety of zeros of . Let us first show that R1 is satisfied. If it were not, then there would exist a smallest index for which and . In other words would have a (non-empty) equidimensional component of dimension and a possibly empty component of dimension . Heintz' version of the Bézout theorem [32, Theorem 1] would imply that

whence . Therefore the number of isolated roots of would be

which contradicts the fact that admits distinct isolated regular solutions.

Let be in . We consider a non-zero homogeneous polynomial in . We write into with primitive as a polynomial of and a monomial in . In particular is monic in and vanishes at all the points of , so it belongs to . This shows that the class of in is integral over when seen as an element of . We deduce that is in Noether position; for instance by using [18, Theorem 1.12].

Now let us show that R2 is satisfied. Consider an irreducible component of for some index and then a point in , which is non-empty in the projective setting. This point is a solution of , and as such, the value of the Jacobian matrix of at has full rank by hypothesis. Therefore, the Jacobian matrix of has full rank over a Zariski dense subvariety of . It follows that the primary component associated to is necessarily prime. On the other hand, R1 implies that is unmixed by [53, chapter 7, section 17]; in other words, has no embedded primary component. Consequently is absolutely radical.

Assume now that , and let us examine complexities. Testing the separability of boils down to computing its gcd with its derivative, which takes time . We compute in time by Proposition 2.15. Then we compute the Jacobian matrix in of in time by Lemma 2.9. The evaluations of all the and of at in can be done in time by means of Theorem 3.3—recall that the homogeneous polynomials to be evaluated are already represented by their specializations at . Finally the determinant of the latter value of is obtained without division thanks to Berkowitz' algorithm [5] in time . Testing the invertibility of this determinant simply requires additional time .

5.The Kronecker solver

Let be an effective field and let be homogeneous polynomials in . Throughout this section, we assume that conditions R1, R2, R3, V1, and V2 are satisfied. The Kronecker solver is incremental in the number of equations to be solved. More precisely, at stage , we assume that a parametrization of the ideal

by a primitive element is given, so that

We will build primitive elements for all intermediate ideals from a unique tuple . In order to obtain a similar parametrization for , we apply the following two steps.

Lifting step

The parametrization of is lifted into a parametrization of

of the form

where is a monic polynomial of degree in , and the are in . It turns out that actually belong to and have total degrees ; see for instance [18, Corollary 3.4]. We will compute them by a variant of the Newton–Hensel lifting strategy over .

Intersection step

Geometrically speaking represents the affine curve . The intersection of this curve with the hypersurface corresponds to . The minimal polynomial of modulo will be computed as a resultant, and the rest of the parametrization of is completed by means of a suitable deformation technique. We define to be such that this parametrization is of the form

5.1.Lifting step

The lifting step essentially relies on a multivariate variant of the Hensel lemma (or, equivalently, on a multivariate power series analogue of Newton's method). The Jacobian matrix of in is written

The identity matrix of size is written .

Algorithm 5.1

Input
and the parametrization of by .

Output

The Kronecker parametrization in of by .

Assumptions

satisfy R1, R2, R3, V1, V2.

  1. Initialize , , ,…, .

  2. Initialize with the inverse of modulo .

  3. While do the following:

    1. update

      and over ;

    2. set ;

    3. compute

      over ;

    4. compute ;

    5. update computed over ;

    6. for from to ,

      update computed over .

  4. For from to , compute over .

  5. Return truncated to order and then regarded in .
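The precision-doubling pattern of step 3 is the classical Newton–Hensel iteration. The following sympy sketch shows it in the simplest univariate situation, lifting a single power series root; Algorithm 5.1 runs the same kind of iteration on a whole Kronecker parametrization at once.

import sympy as sp

t, x = sp.symbols("t x")
f = x**2 - (1 + t)             # lift the root x0 = 1 of f(x, 0)
df = sp.diff(f, x)

N = 8                          # target precision O(t^8)
phi, prec = sp.Integer(1), 1   # the root is known modulo t
while prec < N:
    prec = min(2 * prec, N)    # the precision doubles at every step
    newton = phi - f.subs(x, phi) / df.subs(x, phi)
    phi = sp.expand(sp.series(newton, t, 0, prec).removeO())

print(phi)                     # 1 + t/2 - t**2/8 + t**3/16 - 5*t**4/128 + ...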

Proposition 5.1. Algorithm 5.1 is correct. Let be a fixed rational value. If and if

then it takes time .

Proof. This algorithm directly comes from [27, section 4]. We do not repeat the correctness proof but focus on the complexity analysis in the Turing model for the dense representation of polynomials.

The partial evaluations of at are achieved in time by Lemma 2.10 and (2.1)—recall that the homogeneous polynomials are actually represented by their specialization at and their total degree. All the partial derivatives needed for the Jacobian matrix can be obtained in time by Lemma 2.9.

In step 2, we compute in time

by Theorem 3.3. Then the inversion of is performed by Berkowitz' algorithm in time .

In step 3 we regard and the entries of as polynomials in : their evaluations at take time by Theorem 3.3. The remaining computations in steps 3 to 5 take time .

5.2.Intersection step

For the intersection step we follow the method designed in [27, section 6]. We first compute a deformation of the parametrization of by the primitive element

where the are new variables satisfying for all . Let and let be a shorthand for . The extension of to is parametrized by as follows

where is monic in of degree , and the have degree in , for . Of course, respectively coincide with modulo . In addition we know that the total degrees of in and is ; see [18, section 3].

Algorithm 5.2

Input
The Kronecker parametrization of and .

Output

The Kronecker parametrization of .

Assumptions

satisfy R1, R2, R3, V1, V2.

  1. Compute .

  2. Compute .

  3. For from to do the following:

    1. compute ;

    2. compute ;

    3. compute .

  4. Return truncated to and regarded in .

Proposition 5.2. Except for values of in , Algorithm 5.2 is correct. If , then it takes time .

Proof. This algorithm corresponds to [27, Algorithm 8]. Its correctness is explained in [27, section 6.4]. We adapt the complexity analysis to our context. In steps 2 and 3.a the inverse of modulo is only required modulo . If we were performing this modular inversion over then we would have to handle rational functions of degrees growing with . In order to avoid this coefficient swell we use truncated power series instead of rational functions. In fact, we compute the inverse of modulo for specialized to , then we use classical Hensel lifting to recover . This method works fine whenever the resultant of and does not vanish at . This resultant has degree in .

Algorithm 5.2 needs to shift by in the input and by in the output, which can be done in time using a divide and conquer algorithm; see for instance [7, Lemma 7]. Then is obtained in time . The rest of the algorithm performs polynomial products of cost .

If is a polynomial in of total degree in and , and assuming , then we write

for the polynomial in obtained after replacing by . This substitution can be performed efficiently as follows:

  1. Compute with a softly linear number of operations in ;

  2. Compute . This substitution is applied to each homogeneous component of seen in . In fact if is the homogeneous component of degree then we are led to compute , which basically reduces to computing , and that corresponds to a univariate polynomial shift. Such a shift is known to take softly linear time, as mentioned above.

If then this substitution in takes time . We are now ready to complete the intersection step.
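The univariate shift invoked here can be realized by the following divide and conquer Python sketch. The quadratic poly_mul is a placeholder: the advertised softly linear cost is only attained when it is replaced by a fast product and the powers of y + tau are computed by binary powering.

from fractions import Fraction

def poly_mul(a, b):
    """Naive product of coefficient lists; plug in a fast product here."""
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def shift(q, tau):
    """Coefficients of q(y + tau), by divide and conquer on the
    splitting q = low + y^k * high."""
    n = len(q)
    if n == 1:
        return q[:]
    k = n // 2
    low, high = shift(q[:k], tau), shift(q[k:], tau)
    pw = [Fraction(1)]
    for _ in range(k):                          # (y + tau)^k
        pw = poly_mul(pw, [tau, Fraction(1)])
    res = poly_mul(pw, high)
    for i, c in enumerate(low):
        res[i] += c
    return res

# (y + 1)^3 from y^3:
print(shift([Fraction(0)] * 3 + [Fraction(1)], Fraction(1)))   # [1, 3, 3, 1]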

Algorithm 5.3

Input
The Kronecker parametrization of and .

Output

The Kronecker parametrization of .

Assumptions

satisfy R1, R2, R3, V1, V2.

  1. Compute .

  2. Compute and .

  3. For from to compute:

    1. ;

    2. .

  4. Let .

  5. Compute with in .

  6. Return truncated modulo and seen as polynomials in .

Proposition 5.3. Let be a finite subset of . If we take in and then in at random, then Algorithm 5.3 works correctly as specified with probability . Let be a fixed rational value, and assume that . If , and if

then Algorithm 5.3 takes time .

Proof. The algorithm is borrowed from [27, Algorithm 7]. Again we only focus on the complexity and probability analyses in our context.

First we need to compute along with the corresponding Bézout relation that yields . We wish to apply a fast subresultant algorithm, namely either the one of [21, chapter 11] or the one of [51]. For this purpose, we need to ensure the following properties:

In fact, these coefficients are the only quantities that need to be inverted during the execution of the aforementioned fast subresultant algorithms.

We claim that these properties are met for sufficiently generic values of the . In order to prove and quantify this claim, we introduce

where the are new independent variables, and set . The extension of to is parametrized by as follows

where

For the proofs of these facts we refer the reader to [18, section 3]. It follows that are the respective specializations of at . We further introduce

By the specialization property, the subresultants of and , seen as polynomials of respective degrees and in , are the specializations of those of and at . Such a subresultant coefficient of and is a specific minor of the Sylvester matrix of and , of size at most . Therefore such a subresultant coefficient is a (non-reduced) rational function with denominator and numerator in of total degree in . Consequently the product of all these non-zero numerators yields a non-zero polynomial of total degree

such that implies that the non-zero subresultant coefficients of and are invertible; in particular the resultant is invertible and so are the leading coefficients of and .

The fast computation of raises the same issue, and requires the same kind of probability analysis. We thus introduce

which belongs to , has degree in , in , and total degree in . In this way

is the specialization of at .

A subresultant coefficient of and , seen as polynomials of respective degrees and , is a (non-reduced) rational function with denominator and numerator in of total degree in . Consequently there exists a non-zero polynomial of total degree

such that implies that all the non-zero subresultant coefficients of and and the leading coefficients of and are invertible.

As in the proof of Lemma 4.2, there exists a non-zero polynomial of total degree such that implies that is a primitive element for . This ensures the correctness of the output.

Now assume that . This happens with probability

whenever is taken at random in .

The subresultant coefficients of and have degree in , so except for values of , we can compute modulo in time , by means of the fast subresultant algorithm. A subresultant coefficient of and has degree in , so except for values of , can be computed by means of the fast subresultant algorithm modulo in time . Consequently a suitable value for is found at random in with probability of success

All the substitutions take time according to the above discussion. All the shifts by and in amount to a total time of .

The steps 3.b take time in total. In step 4 we first specialize the variables to in time , after which the rest of the evaluation of

is decomposed into evaluations in for . More precisely, for the evaluation in we ignore the variables for and we may thus appeal to Theorem 3.3 in order to achieve time

5.3.Total complexity

Putting together the propositions of this and the previous sections, we obtain the following complexity bound.

Theorem 5.4. Let be a fixed rational value, let and let be homogeneous polynomials in of total degrees such that:

Then an invertible matrix , a primitive element and a Kronecker parametrization of can be computed in time , with a probability of failure that is bounded by .

Proof. This result is a consequence of the Kronecker solver that was recalled at the beginning of this section. First we pick an invertible matrix of size with entries at random in as described in Lemma 4.5, in order to ensure that, after replacing by , conditions V1 and V2 hold with probability at least

Computing takes time by Proposition 2.15.

On the other hand the probability that are suitable for all the intersection steps is at least

by Proposition 5.3. The probability that a random value of is suitable for all stages of the solver is at least

by Propositions 5.2 and 5.3.

Overall the probability of failure is bounded by . Notice that the constant is not intended to be optimal. The total complexity follows by combining Propositions 5.1, 5.2 and 5.3.

Corollary 5.5. Let be a fixed rational value, let and let be homogeneous polynomials in of total degrees such that R1, R2, and R3 are satisfied.

Then the Kronecker algorithm computes an integer , the finite field , the images of into , an invertible matrix with entries in , a primitive element and a Kronecker parametrization of over in time with any arbitrarily small fixed probability of failure, or in expected time with a Las Vegas algorithm.

Proof. The case

is already handled in Theorem 5.4, and corresponds to . So we focus on the opposite case , for which . We compute the smallest integer such that

that is

Then we set

so is coprime to and we have . Now we appeal to Lemma 3.4: we construct in time with probability of failure ; then the casts between and can be achieved in time .

With this construction at hand we proceed as follows:

Using Proposition 4.6, the correctness of the result can be verified in time as well. We are thus able to reach any arbitrarily small fixed probability of failure by repeating the resolution sufficiently many times. Altogether, this leads to a Las Vegas algorithm that satisfies the claimed expected complexity bound—of course, needs to be replaced by from the outset to conclude the proof.

Remark 5.6. If some of the are , then it is far more efficient to compute:

such that for and for . In this way we are reduced to solving a “smaller” system with all degrees . This simplification of the input system can be achieved in time by means of Proposition 2.15. We leave out the details for the sake of conciseness.

6.Systems over the rational numbers

From now on we turn to solving polynomial systems with coefficients in $\mathbb{Q}$, still under the assumptions R1, R2, and R3. We still write , and we appeal to the algorithm designed in [27, section 4.6]: we first perform the resolution modulo a suitable prime number $p$, and then lift the parametrization of the solution set over the $p$-adic numbers by means of Newton–Hensel lifting. Roughly speaking, once the bit size of the computed $p$-adic approximation exceeds twice the maximum bit size of the integers of the Kronecker parametrization over $\mathbb{Q}$, this parametrization can be reconstructed from its $p$-adic expansion.

In order to formalize this approach, we first need to estimate the bit size of the Kronecker parametrizations in terms of the input size. Then we design a probabilistic algorithm that generates suitable values of $p$ with high probability.

For a scalar, a vector, a matrix, or a polynomial over we write for the maximal norm of its entries or coefficients. Then the height of , written , is defined as

Until the end of the text we set

6.1.$p$-adic Hensel lifting

In section 5.1 we described an algorithm for lifting power series in a Kronecker parametrization. In the following paragraphs we adapt the method to lift $p$-adic integers instead. The algorithm originates from [27, section 4.6]. The Jacobian matrix of in is still written

Algorithm 6.1

Input
and the parametrization of by ; an integer .

Output

The Kronecker parametrization in of by .

Assumptions

satisfy R1, R2, R3, V1, V2.

  1. Initialize , , ,…, .

  2. Initialize with the inverse of modulo .

  3. While do the following:

    1. update and , over ;

    2. set ;

    3. compute over ;

    4. compute over ;

    5. replace computed over ;

    6. for from to ,

      replace computed over .

  4. For from to , compute over .

  5. Return .
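In its simplest univariate form, the iteration of Algorithm 6.1 is ordinary $p$-adic Hensel lifting, as in the following self-contained Python sketch; the polynomial $x^2 - 2$ and the prime $7$ are arbitrary choices of ours.

def hensel_lift(f, df, r, p, kappa):
    """Lift r with f(r) = 0 mod p and df(r) a unit mod p to a root of
    f modulo p^kappa, doubling the precision at every Newton step."""
    prec = 1
    while prec < kappa:
        prec = min(2 * prec, kappa)
        m = p ** prec
        r = (r - f(r) * pow(df(r), -1, m)) % m   # Newton step mod p^prec
    return r

f = lambda u: u**2 - 2
df = lambda u: 2 * u
r = hensel_lift(f, df, 3, 7, 8)      # 3^2 = 2 mod 7
print(r, f(r) % 7**8)                # a square root of 2 in Z/7^8 Z, and 0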

Proposition 6.1. Algorithm 6.1 is correct. Let be a fixed rational value. If

then Algorithm 6.1 takes time , where .

Proof. This algorithm directly comes from [27, section 4.6]. We do not repeat the proof of correctness, but focus directly on the bit complexity for the dense representation of the input polynomials.

First of all, we may compute the partial derivatives of the to any precision , in time , by Lemma 2.9. In step 2, we compute in time

by Theorem 3.3. Then the inversion of modulo is performed by Berkowitz' algorithm in time .

In step 3 the evaluations of and at at a given precision take time , again in view of Theorem 3.3. Since we keep doubling until we reach the precision , the total cost is bounded by . The rest of the computations in steps 3 and 4 take time .

6.2.Chow form and height

Let be a zero dimensional subvariety of . “The” Chow form of is the polynomial that satisfies

where the are taken in . The Chow form is defined up to a scalar factor in , and is homogeneous of total degree .

From now on we assume that is the variety of , where is an invertible integer matrix such that is in Noether position. In this setting, is included in the affine subspace defined by , and we have . We then define the normalized Chow form of as the -multiple of having the coefficient of equal to . The normalized Chow form belongs to .

In order to bound bit sizes of Kronecker parametrizations of , we first need to bound the bit size of the coefficients of . For this purpose we follow a construction due to Philippon [62], and introduce

where denotes the -Mahler measure of defined as

where is the usual Haar measure of mass 1 over the sphere

So is to be understood as a real -dimensional integral.

On the other hand, the usual Mahler measure of is defined as an -dimensional complex contour integral

The relationship between these two measures is given in [52, Theorem 4]:

which implies

(6.1)

Then we appeal to [43, Corollary 2.10, p. 553] which ensures the existence of a rational number such that has all its coefficients in and

(6.2)

To relate the Mahler measure to the actual bit size of we use [43, Lemma 1.1, p. 529]:

(6.3)

Combining the three latter inequalities leads to

(by (6.3))
(by (6.1))
(by (6.2))
(6.4)

6.3.Bit size of Kronecker parametrizations

Let be a Kronecker parametrization of by the primitive element taken in . It relates to the Chow form of as follows:

Since belongs to , all the coefficients of and of for also belong to , and their bit size can be bounded as follows. By and inequality (2.1), the Chow form admits at most terms, whence

(6.5)
(6.6)
(6.7)

where is a shorthand for . Now we estimate how the coefficients of increase because of the change of variables .

Lemma 6.2. Let be homogeneous of total degree , and let be a matrix over . Then we have

Proof. The proof is immediate by using and inequality (2.1).

Lemma 6.3. For all we have .

Proof. This is a well known unconditional variant of Stirling's formula:

Lemma 6.4. Assume that R1, R2, and R3 hold, and let be an matrix over such that is in Noether position. Then we have

In addition, for a Kronecker parametrization of , and with as above, then we have

Proof. From Lemmas 6.2 and 6.3 we obtain

Consequently,

Combining this bound with (6.4)–(6.7) and using , we deduce

Finally we use to bound by , which concludes the proof.

6.4.Lucky primes

Given an input polynomial system satisfying R1, R2, and R3, and having all its coefficients in $\mathbb{Z}$, we need to characterize the prime numbers $p$ for which the system admits isolated simple roots when considered modulo $p$. We will rely on the a posteriori verification criterion from Proposition 4.6, in order to ensure that a resolution modulo $p$ can be lifted over the rational numbers.

Lemma 6.5. For all we have .

Proof. The inequality is an equality for . For we have

whence by induction on .

Lemma 6.6. (Hadamard's inequality) Let $M$ be an $n \times n$ matrix over the real numbers; then we have $|\det M| \leq \prod_{j=1}^{n} \|M_j\|_2$, where $M_j$ represents the $j$-th column vector of $M$.

Lemma 6.7. Let be homogeneous polynomials in which satisfy R1, R2, and R3. There exists a positive integer such that

and such that for any prime $p$ that does not divide it, the sequence $f_1, \dots, f_n$ satisfies R1, R2 and R3 over $\mathbb{F}_p$.

Proof. By Lemma 4.5 there exists an invertible matrix with integer entries such that and is in Noether position. On the other hand by Lemma 4.2 there exists a primitive element for with integer coefficients and such that . Let represent the corresponding Kronecker parametrization, and let and be as above. Let be a non-zero coefficient in of a term of degree for , let be the Jacobian matrix of with respect to , and

By construction, , and thus are positive integers. Now if $p$ does not divide , then for , and the system admits isolated regular roots modulo $p$. Consequently the $f_i$ satisfy R1, R2, and R3 modulo $p$ by Proposition 4.6.

To conclude the proof it remains to bound . First, we may further rewrite a bound for the quantity introduced in Lemma 6.4, by making use of :

By applying Lemma 6.6 to the Sylvester matrix of and , we obtain

whence

(6.8)

As to , we deduce from Lemma 6.4 that

On the one hand, we have

hence

On the other hand, Lemma 6.5 gives us

From Lemma 6.4 and , it follows that

Let denote the group of permutations of . Then we have

Again by Hadamard's inequality we deduce that

whence

(6.9)

The conclusion follows by adding the right-hand sides of (6.8), (6.9), and .

Remark 6.8. Theorem 2.1 of [17] provides a more general statement than Lemma 6.7 for the affine case. Extensions to the quasi-projective case can be found in [22].

Lemma 6.9. Assume that for . Let be an invertible matrix over and let be a linear form in . Let be a prime number, and let be polynomials in such that:

Then satisfy R1, R2, and are in Noether position (over ), is a primitive element for , and coincide modulo with the Kronecker parametrization of over by .

Proof. By Proposition 4.6, the residues modulo of form a Kronecker parametrization of . We may use Algorithm 6.1 from a theoretical point of view in order to prove the existence of univariate polynomials over the -adic numbers , satisfying the following properties:

Consequently, Proposition 4.6 ensures that is a Kronecker parametrization of over by the primitive element . By uniqueness, it coincides with the image over of the Kronecker parametrization of over by . Using Proposition 4.6 once more, with the parametrization over the rationals, concludes the proof.

Now we address the question of obtaining a suitable prime number.

Lemma 6.10. There exists a probabilistic algorithm which, given positive integers and such that , computes a prime in the range , which does not divide , with probability of success , and in time .

Proof. This is a consequence of [21, Theorem 18.10, part (i)]: we simply appeal to the deterministic primality test designed in [1] in order to ensure that the returned integer is always prime. This test runs in polynomial time.
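
The following Python sketch illustrates the strategy of Lemma 6.10 under simplifying assumptions: candidates are drawn uniformly from a range of the form [B, 2B) and tested for primality and non-divisibility. Here sympy.isprime merely stands in for the deterministic test of [1], and the names m, B, and attempts are ours.

from random import randrange
from sympy import isprime

def random_prime_avoiding(m, B, attempts=64):
    """Return a prime p in [B, 2B) that does not divide m."""
    for _ in range(attempts):
        p = randrange(B, 2 * B)
        if isprime(p) and m % p != 0:
            return p
    raise RuntimeError("no suitable prime found; increase attempts or B")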

6.5.Top level algorithm over the rational numbers

The idea behind the Kronecker solver over is as follows. For a suitable prime , we use the Kronecker algorithm to solve the input system over . If the algorithm finishes with success, then we appeal to the a posteriori verification criterion of Lemma 6.9. If the criterion passes, then we may lift the Kronecker parametrization over the -adic integers to arbitrary precision by using Algorithm 6.1, and we know that the -adic parametrization is the image of the rational one. Consequently, as soon as is sufficiently large, we can reconstruct the parametrization over .
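
The lifting principle can be illustrated on its simplest instance, namely a simple root of a univariate polynomial. The sketch below (with names of our own choosing, and plain Horner evaluation) doubles the p-adic precision at every Newton step; Algorithm 6.1 lifts a whole Kronecker parametrization in the same spirit.

def hensel_lift_root(f, df, r, p, k_target):
    """Lift r, with f(r) = 0 mod p and f'(r) invertible mod p, to a
    root of f modulo p**k_target; f and df are the coefficient lists
    of the polynomial and of its derivative."""
    def horner(cs, x, m):
        v = 0
        for c in reversed(cs):
            v = (v * x + c) % m
        return v
    k = 1
    while k < k_target:
        k = min(2 * k, k_target)
        m = p ** k
        # Newton step modulo p^k: r <- r - f(r)/f'(r).
        r = (r - horner(f, r, m) * pow(horner(df, r, m), -1, m)) % m
    return r

# Example: hensel_lift_root([-2, 0, 1], [0, 2], 3, 7, 8) lifts the
# root 3 of x^2 - 2 modulo 7 to a root modulo 7^8.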

Let us recall that a rational number , with and , is uniquely determined by its -adic expansion to precision whenever and ; see for instance [68, section 4]. Using the fast Euclidean algorithm from [21, Theorem 5.26], this reconstruction takes time . The choices of and of the precision needed by the solver are made precise in the following algorithm. We recall that is a fixed positive rational number, thought of as being small.
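
A naive quadratic version of this rational reconstruction can be sketched as follows; the numerator bound N and denominator bound D are assumed to satisfy 2*N*D <= m, and the paper of course relies on the fast Euclidean algorithm of [21, Theorem 5.26] rather than on this schoolbook loop.

def rational_reconstruction(a, m, N, D):
    """Return (num, den) with a = num/den mod m, |num| <= N and
    0 < den <= D, or None if no such fraction exists."""
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > N:                      # extended Euclid, stopped early
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 != 0 and abs(t1) <= D:
        return (r1, t1) if t1 > 0 else (-r1, -t1)
    return None

# Example: modulo m = 101 the fraction 3/7 has residue 87, and
# rational_reconstruction(87, 101, 7, 7) recovers (3, 7).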

Algorithm 6.2

Input
homogeneous in .

Output

An invertible matrix over such that is in Noether position, a linear form , and a Kronecker parametrization by of .

Assumption

satisfy R1, R2, and R3.

  1. For each from to , compute the maximum bit size of the coefficients of .

    Compute and a positive integer such that .

    Compute such that

  2. Compute a prime number at random in via Lemma 6.10.

  3. Call the Kronecker algorithm underlying Theorem 5.4 with the images of in . If the resolution fails then go to step 2.

  4. Let be the data returned by the Kronecker algorithm in step 3. If , or if is not separable, or if , or if one of the does not vanish at modulo , or if the value of the Jacobian matrix of in at is not invertible modulo , then go to step 2.

  5. Let (resp. ) be the canonical preimage of (resp. of ) with entries in . Let be the first integer such that

    Compute over , and use Algorithm 6.1 to obtain the Kronecker parametrization by of over .

  6. By rational reconstruction, compute the unique polynomials in that coincide with modulo .

Notice that is whenever all the have their coefficients in . Therefore, in the following estimates, we shall write in order to make the complexity bounds valid in this particular case.
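
In Python-style pseudo-code, the overall structure of Algorithm 6.2 may be summarized as follows. Every subroutine is injected as a parameter because each one stands for a component of the paper (the bounds of step 1, the prime selection of Lemma 6.10, the solver of Theorem 5.4, the checks of step 4, Algorithm 6.1, and rational reconstruction); the names are ours.

def kronecker_over_Q(F, size_bounds, draw_prime, solve_mod_p,
                     checks_pass, lift, reconstruct):
    """Top-level Las Vegas loop of Algorithm 6.2 (schematic)."""
    B, precision = size_bounds(F)                   # step 1
    while True:
        p = draw_prime(B)                           # step 2
        sol = solve_mod_p(F, p)                     # step 3
        if sol is None or not checks_pass(sol, F, p):
            continue                                # step 4: retry with a new prime
        lifted = lift(sol, F, p, precision)         # step 5: Algorithm 6.1
        return reconstruct(lifted, p ** precision)  # step 6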

Theorem 6.11. Algorithm 6.2 is correct, as a Las Vegas algorithm, and takes expected time

Proof. By Lemma 6.10 the prime determined in step 2 does not divide the integer defined in Lemma 6.7 with probability . In other words, the sequence satisfies properties R1, R2, and R3 with probability . Thanks to , Theorem 5.4 ensures that the Kronecker solver modulo succeeds with probability . It follows that the expected number of times that the algorithm repeats steps 2 to 4 is bounded.

Once the algorithm passes step 4, Lemma 6.9 ensures that the parametrization modulo may be lifted to any arbitrary precision, and that the parametrization over may be recovered whenever the precision is sufficiently large. According to the preceding discussion on rational reconstruction, we need that

with defined in Lemma 6.4; it satisfies

with and being . This concludes the proof of correctness.

The computation of needs time by Lemmas 2.6 and 2.8. Then , and can be computed in time , which is negligible since

The cost of step 2 is by Lemma 6.10, which is again negligible.

By Theorem 5.4 step 3 takes time

since . Proposition 4.6 then provides us with the cost

for step 4. Computing over in step 5 takes by Proposition 2.15. The cost of the -adic Hensel lifting is given in Proposition 6.1:

where . We verify that

Step 6 requires additional time .

The version of the Kronecker solver summarized in Algorithm 6.2 can be optimized by using ideas previously developed for the case of input polynomials given by straight-line programs [27]; see implementation details in the C++ library geomsolvex of Mathemagix [37].

A first optimization consists in minimizing the height of the parametrization over the rationals. For this purpose it is worth finding a matrix and a primitive element of small height. This search may be achieved efficiently modulo , and we refer the reader to [50] for the description of the underlying algorithms.

A second optimization concerns the early termination of the lifting stage over the -adic integers. In fact, one may attempt rational reconstruction before the bound on the precision is actually reached. If all the reconstructions succeed, then it suffices to verify that the Kronecker parametrization actually describes the solution set. A probabilistic method for doing so consists in evaluating the equations at the parametrization modulo an independent random prime number with a bit size similar to that of . This verification may also be done deterministically over the rationals, but at a higher cost.

For an input system in variables of degrees , the quantity is of the order . Therefore with , Algorithm 6.2 involves primes of bit size close to 52 bits. In fact, the bounds used in the design of Algorithm 6.2 have been simplified and overestimated for the sake of the presentation. Sharper bounds should be considered in practice, in order to ensure that fits into 32 bits for small and moderate systems, and 64 bits for larger ones.

Let us finally mention that the present bound for the bit size of the Kronecker parametrization over the rationals turns out to be essentially sharp for generic systems, according to [54, Theorem 3.1]. Consequently, the complexity reached in Theorem 6.11 is nearly linear in the “expected” size of the output, from the asymptotic point of view. This fact is illustrated by the following example.

Example 6.12. Consider the system, adapted from [12],

where and . Let be a root of

and set

so that is clearly a zero of . For , we observe that

It follows that is also a zero of . The value of the Jacobian matrix of in at is

By expanding with respect to the last column, we obtain

Consequently is invertible. From

Proposition 4.6 thus ensures that satisfies R1 and R2, that it is in Noether position, and that it is parametrized by the primitive element , as follows:

7.Conclusion

The Kronecker solver was already known to be competitive both in theory and practice for polynomial systems represented by straight-line programs; see for instance timings in [2, 27]. Under suitable genericity assumptions and for suitable ground fields, we have shown in this paper that this solver also leads to improved probabilistic asymptotic complexity bounds for the dense representation of input polynomials.

Our main results concern finite fields and rational numbers, but our methods can in principle be extended to any ground field over which fast multivariate modular composition is available. Unfortunately, it is still a major open problem to design fast algorithms for modular composition over arbitrary ground fields.

Another major problem concerns the practical efficiency of algorithms for modular composition. This holds in particular for the recent fast algorithms for finite fields designed by Kedlaya and Umans [42], and revisited in [36]; we refer the reader to the concluding section of [36] for a quantitative discussion. Consequently we do not expect our Theorem 3.3 to be of practical use, and we cannot claim our new main complexity bounds for polynomial system solving to be relevant in practice.

Nevertheless, it is important to point out that Theorem 3.3 uses modular composition as a blackbox. Whenever faster algorithms for this task do become available, they can directly be used in our algorithms. As one possible research direction, it is worthwhile to investigate the replacement of generic modular compositions by specific ones, such as those studied in [33, 34] and which feature more promising performances.

The Kronecker solver admits various extensions to the case when the genericity assumptions R1 and R2 are no longer satisfied: see for instance [39, 50], and further references in [18]. In that case, the system might have an infinite number of solutions, and one may express them in terms of a union of equidimensional or irreducible algebraic varieties. We expect that these extensions could be revisited for the dense representation of input polynomials, and we plan to investigate the details in future work.

Yet another important question for systems over the rational numbers concerns the complexity of finding numerical approximations of the roots. In generic cases, we have shown in Theorem 6.11 how to compute Kronecker representations of solution sets in quasi-optimal time. This implies that numerical roots may also be computed in quasi-linear time whenever the requested precision is “sufficiently large”, by means of fast univariate polynomial solvers. It would be interesting to quantify this “sufficiently large”, and to compare with the state of the art in numerical polynomial solvers.

Appendix A.Linear changes of variables

This appendix is devoted to subjecting a multivariate polynomial to a linear change of variables. More precisely, given and an matrix over a commutative ring , we wish to compute

The fast algorithms that we propose below do not seem to be available in the literature. They are suitable for any coefficient ring with sufficiently many elements, and they are also well suited for homogeneous polynomials.

A.1.Algebraic complexity model

In this subsection we focus on the algebraic complexity model (computation trees, for instance); we let be an effective commutative ring, and we let be a cost function such that two polynomials in may be multiplied with cost . The evaluation of a multivariate polynomial at points in a block of points , where is a finite subset of , is usually achieved by the successive use of fast univariate evaluations, as recalled in the following lemma.

Lemma A.1. Let , let be of partial degree in for , and let be a subset of of cardinality . Then, all the values of at can be computed with arithmetic operations in .

Proof. We interpret as a univariate polynomial in ,

We evaluate at recursively. Then, for each , we evaluate at all the points of , with a total cost . Denoting by the cost of the algorithm in terms of operations in , we thus obtain

By induction over , it follows that

which implies the claimed bound.
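
For concreteness, here is a naive Python transcription of this recursive scheme. A polynomial is stored as nested coefficient lists (innermost entries are ring elements), and plain Horner evaluation stands in for the fast univariate multi-point evaluations of the actual algorithm.

from itertools import product

def horner(coeffs, x):
    v = 0
    for c in reversed(coeffs):
        v = v * x + c
    return v

def eval_on_grid(f, S, n):
    """Values of f at all points of S^n, as a dict indexed by points."""
    if n == 0:
        return {(): f}
    # f = sum_j x1^j * f_j(x2, ..., xn): evaluate the f_j recursively.
    tables = [eval_on_grid(fj, S, n - 1) for fj in f]
    values = {}
    for tail in product(S, repeat=n - 1):
        coeffs = [t[tail] for t in tables]  # univariate slice in x1
        for s in S:
            values[(s,) + tail] = horner(coeffs, s)
    return values

# Example: f(x1, x2) = 1 + x1*x2 is stored as [[1, 0], [0, 1]], and
# eval_on_grid([[1, 0], [0, 1]], [0, 1, 2], 2) tabulates it on {0, 1, 2}^2.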

The next lemma, also well known, concerns the corresponding interpolation problem.

Lemma A.2. Let , let be pairwise distinct points in such that is invertible whenever , let be a family of values in for running over . The unique polynomial of partial degrees and such that for all can be computed with arithmetic operations in , including inversions.

Proof. Again we interpret as a univariate polynomial in ,

(A.1)

For all we interpolate the values with operations in . We then recursively interpolate and form as in (A.1). The total cost is obtained as in the proof of the previous lemma.
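
The inverse conversion may be sketched in the same style, with Newton's divided differences in place of fast univariate interpolation; exact coefficient arithmetic (for instance fractions.Fraction values) is assumed, so that the required divisions by differences of points are well defined.

from itertools import product

def newton_interp(S, vals):
    """Coefficients of the unique polynomial of degree < len(S)
    taking value vals[i] at S[i]."""
    n = len(S)
    dd = list(vals)                            # divided differences
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (S[i] - S[i - j])
    coeffs = [dd[-1]]
    for i in range(n - 2, -1, -1):             # expand the Newton form
        coeffs = [0] + coeffs                  # multiply by x
        for k in range(len(coeffs) - 1):
            coeffs[k] -= S[i] * coeffs[k + 1]  # subtract S[i] times it
        coeffs[0] += dd[i]
    return coeffs

def interp_on_grid(values, S, n):
    """Nested coefficient lists of the polynomial matching values on S^n."""
    if n == 0:
        return values[()]
    slices = [{} for _ in S]
    for tail in product(S, repeat=n - 1):
        cs = newton_interp(S, [values[(s,) + tail] for s in S])
        for j, c in enumerate(cs):
            slices[j][tail] = c
    return [interp_on_grid(slc, S, n - 1) for slc in slices]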

The aim of the following proposition is the fast evaluation of at a set of points of the form , for any matrix and any vector .

Proposition A.3. Let , let be of partial degree in for , let be a subset of of cardinality such that is invertible whenever , let be a matrix over , and let . Let be the column vector with entries . If an LU-decomposition of is given, then and can be computed with arithmetic operations in , including inversions.

Proof. We write . We first assume that is upper triangular, and we partition into

where and . We compute

for using operations in . For , we then evaluate at by induction. The base case takes constant time . Consequently, for any , the total number of operations in is , by the same argument as in the proof of Lemma A.1. We recover with operations in by Lemma A.2.

If is lower triangular, then we may reverse the order of the variables in and of the columns of in order to reduce to the upper triangular case. Alternatively, we may adapt the latter decomposition of the set of points, as follows:

where and . So we compute

and evaluate at by induction, for .

Finally if is general, then it suffices to use the given LU-decomposition, where is lower triangular with on the diagonal, and is upper triangular. In fact we have , so we compute and then and .

In the next lemma the same technique is adapted to homogeneous polynomials.

Lemma A.4. Let be homogeneous of degree , let be a matrix over , and let be a subset of of cardinality such that is invertible whenever . If an LU-decomposition of is given, then can be computed with arithmetic operations in .

Proof. Assume first that is lower triangular and let . We are led to compose with

by means of Proposition A.3. If is upper triangular, then it suffices to reverse the order of the variables in and of the columns of , in order to reduce to the lower triangular case. Alternatively, we may set and compose with

in order to obtain . Finally, for any , it suffices to use the given LU-decomposition.

Proposition A.5. Let be homogeneous of degree , let be an matrix over , and let be a subset of of cardinality such that is invertible whenever . If an LU-decomposition of is given, then can be computed with arithmetic operations in .

Proof. The total number of coefficients in is by inequality (2.1). We decompose

(A.2)

where is made of the terms of that are multiples of , then is made of the terms of that are multiples of ,…, and finally is made of the terms of that are multiples of (that is a -multiple of a power of ). In this way, we are led to compute for , with of degree ; this requires operations in , by Lemma A.4. Then can be recovered with further operations.

Remark A.6. If one can use specific sequences of points , for instance in geometric progression, then multi-point evaluations and interpolations in one variable and in degree over cost by means of [10], which saves a factor of in the above complexity estimates.

A.2.Coefficients in a Galois ring

For the purpose of the present paper, we need to adapt the results of the previous subsection to the case when is the Galois ring , in the context of Turing machines. In the next lemmas we use the lexicographic order on , written , defined by

In terms of Turing machines, we need the following variants of Lemmas A.1 and A.2.

Lemma A.7. Let , let be of partial degree in for , and let be values in . Then, the values for running over in the lexicographic order can be computed in time

Proof. The proof follows that of Lemma A.1, while taking data reorganizations into account. More precisely, using one matrix transposition, we reorganize the values of the after the recursive calls into the sequence of

for running over in the lexicographic order . Then, after the multi-point evaluations of , we need to transpose the array made of the values of , in order to ensure the lexicographic ordering in the output. The cost of these transpositions is by Lemma 2.1, which is negligible.

Lemma A.8. Assume and . Let be pairwise distinct values in such that is invertible modulo for all , and let be a family of values in for running over in the lexicographic order . The unique polynomial of partial degree in for , and such that for all in , can be computed in time

Proof. The proof follows that of Lemma A.2, by performing the data reorganizations in the opposite direction to the one in the proof of Lemma A.7.

From now on, for convenience, we discard the case . In this way, whenever , we may use .

Proposition A.9. Assume and . Let be of partial degree in for , and let be a matrix over . If an LU-decomposition of is given, then can be computed in time .

Proof. We first generate a subset of of cardinality in time ; this ensures the invertibility of for . The proof then follows that of Proposition A.3, while taking data reorganizations into account. When is upper triangular, the computation of requires the multi-point evaluation of regarded in : we may simply appeal to the fast univariate algorithm, because it only involves additions, subtractions, and products by elements of over the ground ring . Consequently, may be obtained in time by Lemma 2.5. In addition, the array of values of the must be transposed at the end, in order to guarantee the lexicographic ordering necessary to interpolate .

When is lower triangular, the data reorganization costs essentially the same, except that the computation of takes time by Lemmas 2.11 and 2.5.

Before completing the proof of Proposition 2.15, we need the following lemma in order to change the representation of a homogeneous polynomial.

Lemma A.10. Let be a homogeneous polynomial of degree in , represented as before by and , and let . Then, for any we can compute in time .

Proof. For simplicity, the proof is given for , but it extends coefficient-wise to any . A sparse representation of consists of a sequence of pairs of coefficients and exponent vectors. More precisely, if then a sparse representation of it is the sequence of the pairs , for all the non-zero coefficients . The bit size of an exponent vector is , and therefore the bit size of a sparse representation of is by (2.1).

In order to prove the lemma, we first convert , given in dense representation, into a sparse representation. When the sparse representation of may be obtained in time . Otherwise and we regard in ,

and recursively compute the sparse representation of for . These representations may naturally be glued together into a sparse representation of , in time , by adding the exponent of into each exponent vector. A straightforward induction leads to a total time for the change of representation of . Then the sparse representation of may be deduced with additional time by appending the exponent of needed for homogenization.

Second, from the latter sparse representation of we may simply discard the exponents of and multiply the coefficients with the corresponding powers of , in order to obtain a sparse representation of in time .

Finally, it remains to construct the dense representation of from its sparse representation. To this end, we sort the sparse representation in increasing lexicographic order on the exponent vectors, in time . We next compute the dense representation by induction over . Writing

the sparse representations of are computed by induction, after removal of the powers of . The induction ends when , in which case the conversion to dense representation requires time . In total, the dense representation of can be computed in time .
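
The substitution step in the middle of this proof may be illustrated by a small Python sketch: a sparse homogeneous polynomial is modelled as a dict from exponent vectors (e0, ..., en) to non-zero coefficients, and the first variable is specialized to a ring element u by dropping its exponent and scaling the coefficient accordingly (the names are ours).

def substitute_x0(f_sparse, u):
    """Return the sparse representation of f(u, x1, ..., xn)."""
    g = {}
    for exps, c in f_sparse.items():
        e0, tail = exps[0], exps[1:]
        g[tail] = g.get(tail, 0) + c * u**e0  # fold x0^e0 into the coefficient
    return g

# Example: x0^2 + x0*x1 + x1^2 is {(2, 0): 1, (1, 1): 1, (0, 2): 1};
# substitute_x0(..., 3) yields {(0,): 9, (1,): 3, (2,): 1}, i.e. 9 + 3*x1 + x1^2.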

Proof of Proposition 2.15. We follow the proofs of Lemma A.4 and Proposition A.5, again while taking the cost of data reorganizations into account.

In the proof of Lemma A.4, the cost of obtaining and is given by Lemma A.10, namely .

In the proof of Proposition A.5 we first need to compute the decomposition (A.2) of . The polynomial

is represented by

and . Consequently may be easily obtained in time . Then the rest of the decomposition is obtained from , recursively. The total cost for obtaining all the is therefore bounded by .

For any , any , and any , the computations of and of take time since their supports have cardinality by (2.1).

Finally, from

we obtain the representation of as

using additional time . The cost of the data reorganizations in the proof of Proposition A.5 is negligible.

Bibliography

[1]

M. Agrawal, N. Kayal, and N. Saxena. PRIMES is in P. Ann. of Math., 160(2):781–793, 2004.

[2]

B. Bank, M. Giusti, J. Heintz, G. Lecerf, G. Matera, and P. Solernó. Degeneracy loci and polynomial equation solving. Found. Comput. Math., 15(1):159–184, 2015.

[3]

M. Bardet. Étude des systèmes algébriques surdéterminés. Applications aux codes correcteurs et à la cryptographie. PhD thesis, Université Pierre et Marie Curie - Paris VI, 2004. https://tel.archives-ouvertes.fr/tel-00449609.

[4]

M. Bardet, J.-C. Faugère, and B. Salvy. On the complexity of the Gröbner basis algorithm. J. Symbolic Comput., 70:49–70, 2015.

[5]

S. J. Berkowitz. On computing the determinant in small parallel time using a small number of processors. Inform. Process. Lett., 18:147–150, 1984.

[6]

J. Berthomieu, J. van der Hoeven, and G. Lecerf. Relaxed algorithms for p-adic numbers. J. Théor. Nombres Bordeaux, 23(3), 2011.

[7]

J. Berthomieu, G. Lecerf, and G. Quintin. Polynomial root finding over local rings and application to error correcting codes. Appl. Alg. Eng. Comm. Comp., 24(6):413–443, 2013.

[8]

A. Bostan, F. Chyzak, M. Giusti, R. Lebreton, G. Lecerf, B. Salvy, and É. Schost. Algorithmes Efficaces en Calcul Formel. Frédéric Chyzak (self-published), Palaiseau, 2017. Electronic version available from https://hal.archives-ouvertes.fr/AECF.

[9]

A. Bostan, Ph. Flajolet, B. Salvy, and É. Schost. Fast computation of special resultants. J. Symbolic Comput., 41(1):1–29, 2006.

[10]

A. Bostan and É. Schost. Polynomial evaluation and interpolation on special sets of points. J. Complexity, 21(4):420–446, 2005.

[11]

R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. J. ACM, 25(4):581–595, 1978.

[12]

W. D. Brownawell. Bounds for the degrees in the Nullstellensatz. Ann. of Math., 126(3):577–591, 1987.

[13]

P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic complexity theory, volume 315 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, 1997.

[14]

J. F. Canny, E. Kaltofen, and L. Yagati. Solving systems of nonlinear polynomial equations faster. In Proceedings of the ACM-SIGSAM 1989 International Symposium on Symbolic and Algebraic Computation, ISSAC '89, pages 121–128, New York, NY, USA, 1989. ACM.

[15]

D. G. Cantor and E. Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Infor., 28:693–701, 1991.

[16]

J.-M. Couveignes and R. Lercier. Fast construction of irreducible polynomials over finite fields. Israel J. Math., 194(1):77–105, 2013.

[17]

C. D'Andrea, A. Ostafe, I. E. Shparlinski, and M. Sombra. Reduction modulo primes of systems of polynomial equations and algebraic dynamical systems. Trans. Amer. Math. Soc., 371(2):1169–1198, 2019.

[18]

C. Durvye and G. Lecerf. A concise proof of the Kronecker polynomial system solver from scratch. Expo. Math., 26(2):101–139, 2008.

[19]

J.-C. Faugère, P. Gaudry, L. Huot, and G. Renault. Sub-cubic change of ordering for Gröbner basis: A probabilistic approach. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 170–177, New York, NY, USA, 2014. ACM.

[20]

J.-C. Faugère, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. J. Symbolic Comput., 16(4):329–344, 1993.

[21]

J. von zur Gathen and J. Gerhard. Modern computer algebra. Cambridge University Press, New York, 3rd edition, 2013.

[22]

N. Giménez and G. Matera. On the bit complexity of polynomial system solving. J. Complexity, 51:20–67, 2019.

[23]

M. Giusti. Some effectivity problems in polynomial ideal theory. In J. Fitch, editor, EUROSAM 84: International Symposium on Symbolic and Algebraic Computation Cambridge, England, July 9–11, 1984, pages 159–171, Berlin, Heidelberg, 1984. Springer Berlin Heidelberg.

[24]

M. Giusti, K. Hägele, J. Heintz, J. L. Montaña, J. E. Morais, and L. M. Pardo. Lower bounds for Diophantine approximations. J. Pure Appl. Algebra, 117/118:277–317, 1997.

[25]

M. Giusti, J. Heintz, J. E. Morais, J. Morgenstern, and L. M. Pardo. Straight-line programs in geometric elimination theory. J. Pure Appl. Algebra, 124(1-3):101–146, 1998.

[26]

M. Giusti, J. Heintz, J. E. Morais, and L. M. Pardo. When polynomial equation systems can be “solved” fast? In Applied algebra, algebraic algorithms and error-correcting codes (Paris, 1995), volume 948 of Lecture Notes in Comput. Sci., pages 205–231. Springer-Verlag, 1995.

[27]

M. Giusti, G. Lecerf, and B. Salvy. A Gröbner free alternative for polynomial system solving. J. Complexity, 17(1):154–211, 2001.

[28]

B. Grenet, J. van der Hoeven, and G. Lecerf. Deterministic root finding over finite fields using Graeffe transforms. Appl. Alg. Eng. Comm. Comp., 27(3):237–257, 2016.

[29]

D. Harvey and J. van der Hoeven. Faster polynomial multiplication over finite fields using cyclotomic coefficient rings. J. Complexity, 54:101404, 2019.

[30]

D. Harvey and J. van der Hoeven. Integer multiplication in time . Technical report, HAL, 2019. http://hal.archives-ouvertes.fr/hal-02070778.

[31]

D. Harvey and J. van der Hoeven. Polynomial multiplication over finite fields in time . Technical report, HAL, 2019. http://hal.archives-ouvertes.fr/hal-02070816.

[32]

J. Heintz. Definability and fast quantifier elimination in algebraically closed fields. Theor. Comput. Sci., 24(3):239–277, 1983.

[33]

J. van der Hoeven and G. Lecerf. Modular composition via complex roots. Technical report, CNRS & École polytechnique, 2017. http://hal.archives-ouvertes.fr/hal-01455731.

[34]

J. van der Hoeven and G. Lecerf. Modular composition via factorization. J. Complexity, 48:36–68, 2018.

[35]

J. van der Hoeven and G. Lecerf. Accelerated tower arithmetic. J. Complexity, 55:101402, 2019.

[36]

J. van der Hoeven and G. Lecerf. Fast multivariate multi-point evaluation revisited. J. Complexity, 56:101405, 2020.

[37]

J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix. From 2002. http://www.mathemagix.org.

[38]

Xiaohan Huang and V. Y. Pan. Fast rectangular matrix multiplication and applications. J. Complexity, 14(2):257–299, 1998.

[39]

G. Jeronimo and J. Sabia. Effective equidimensional decomposition of affine varieties. J. Pure Appl. Algebra, 169(2–3):229–248, 2002.

[40]

E. Kaltofen and V. Shoup. Fast polynomial factorization over high algebraic extensions of finite fields. In Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, ISSAC '97, pages 184–188, New York, NY, USA, 1997. ACM.

[41]

K. S. Kedlaya and C. Umans. Fast modular composition in any characteristic. In FOCS'08: IEEE Conference on Foundations of Computer Science, pages 146–155, Washington, DC, USA, 2008. IEEE Computer Society.

[42]

K. S. Kedlaya and C. Umans. Fast polynomial factorization and modular composition. SIAM J. Comput., 40(6):1767–1802, 2011.

[43]

T. Krick, L. M. Pardo, and M. Sombra. Sharp estimates for the arithmetic Nullstellensatz. Duke Math. J., 109(3):521–598, 2001.

[44]

L. Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. J. reine angew. Math., 92:1–122, 1882.

[45]

Y. N. Lakshman. On the complexity of computing a Gröbner basis for the radical of a zero dimensional ideal. In Proceedings of the Twenty-second Annual ACM Symposium on Theory of Computing, STOC '90, pages 555–563, New York, NY, USA, 1990. ACM.

[46]

Y. N. Lakshman. A single exponential bound on the complexity of computing Gröbner bases of zero dimensional ideals. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 227–234, Boston, MA, 1991. Birkhäuser Boston.

[47]

Y. N. Lakshman and D. Lazard. On the complexity of zero-dimensional algebraic systems. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 217–225, Boston, MA, 1991. Birkhäuser Boston.

[48]

D. Lazard. Gröbner bases, Gaussian elimination and resolution of systems of algebraic equations. In J. A. Hulzen, editor, Computer Algebra: EUROCAL'83, European Computer Algebra Conference London, England, March 28–30, 1983 Proceedings, pages 146–156. Springer Berlin Heidelberg, 1983.

[49]

F. Le Gall. Powers of tensors and fast matrix multiplication. In K. Nabeshima, editor, ISSAC'14: International Symposium on Symbolic and Algebraic Computation, pages 296–303, New York, NY, USA, 2014. ACM.

[50]

G. Lecerf. Computing the equidimensional decomposition of an algebraic closed set by means of lifting fibers. J. Complexity, 19(4):564–596, 2003.

[51]

G. Lecerf. On the complexity of the Lickteig–Roy subresultant algorithm. J. Symbolic Comput., 92:243–268, 2019.

[52]

P. Lelong. Mesure de Mahler et calcul de constantes universelles pour les polynômes de variables. Math. Ann., 299(1):673–695, 1994.

[53]

H. Matsumura. Commutative ring theory, volume 8 of Cambridge Studies in Advanced Mathematics. Cambridge university press, 1989.

[54]

D. McKinnon. An arithmetic analogue of Bezout's theorem. Compos. Math., 126(2):147–155, 2001.

[55]

J. M. McNamee and V. Y. Pan. Numerical Methods for Roots of Polynomials, Part II, volume 16 of Studies in Computational Mathematics. Elsevier, 2013.

[56]

B. Mourrain, V. Y. Pan, and O. Ruatta. Accelerated solution of multivariate polynomial systems of equations. SIAM J. Comput., 32(2):435–454, 2003.

[57]

B. Mourrain and Ph. Trébuchet. Solving projective complete intersection faster. In Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation, ISSAC '00, pages 234–241, New York, NY, USA, 2000. ACM.

[58]

B. Mourrain and Ph. Trébuchet. Generalized normal forms and polynomial system solving. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, ISSAC '05, pages 253–260, New York, NY, USA, 2005. ACM.

[59]

B. Mourrain and Ph. Trébuchet. Border basis representation of a general quotient algebra. In Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation, ISSAC '12, pages 265–272, New York, NY, USA, 2012. ACM.

[60]

A. K. Narayanan. Fast computation of isomorphisms between finite fields using elliptic curves. In L. Budaghyan and F. Rodríguez-Henríquez, editors, Arithmetic of Finite Fields. 7th International Workshop, WAIFI 2018, Bergen, Norway, June 14–16, 2018, Revised Selected Papers, volume 11321 of Lecture Notes in Comput. Sci., pages 74–91. Springer, Cham, 2018.

[61]

C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.

[62]

P. Philippon. Sur des hauteurs alternatives. I. Math. Ann., 289(1):255–283, 1991.

[63]

A. Poteaux and É. Schost. On the complexity of computing with zero-dimensional triangular sets. J. Symbolic Comput., 50:110–138, 2013.

[64]

A. Schönhage. Schnelle Berechnung von Kettenbruchentwicklungen. Acta Informatica, 1(2):139–144, 1971.

[65]

A. Schönhage, A. F. W. Grotefeld, and E. Vetter. Fast algorithms: A multitape Turing machine implementation. B. I. Wissenschaftsverlag, Mannheim, 1994.

[66]

J. T. Schwartz. Fast probabilistic algorithms for verification of polynomial identities. J. ACM, 27(4):701–717, 1980.

[67]

V. Shoup. New algorithms for finding irreducible polynomials over finite fields. Math. Comp., 54(189):435–447, 1990.

[68]

P. S. Wang. A p-adic algorithm for univariate partial fractions. In Proceedings of the Fourth ACM Symposium on Symbolic and Algebraic Computation, SYMSAC '81, pages 212–217, New York, NY, USA, 1981. ACM.

[69]

R. Zippel. Probabilistic algorithms for sparse polynomials. In Proceedings EUROSAM '79, number 72 in Lect. Notes Comput. Sci., pages 216–226. Springer-Verlag, 1979.