On the complexity of polynomial reduction

Joris van der Hoeven

LIX, CNRS

École polytechnique

91128 Palaiseau Cedex

France

Email: vdhoeven@lix.polytechnique.fr

Web: http://lix.polytechnique.fr/~vdhoeven

November 4, 2023

This work has been supported by the ANR-09-JCJC-0098-01 MaGiX project, the Digiteo 2009-36HD grant and Région Ile-de-France.

In this paper, we present a new algorithm for reducing a multivariate polynomial with respect to an autoreduced tuple of other polynomials. In a suitable sparse complexity model, it is shown that the execution time is essentially the same (up to a logarithmic factor) as the time needed to verify that the result is correct. This is a first step towards taking advantage of fast sparse polynomial arithmetic for the computation of Gröbner bases.

Keywords: sparse reduction, complexity, division, Gröbner basis, algorithm

A.M.S. subject classification: 68W30, 13P10, 12Y05, 68W40

1. Introduction

Let $A = \mathbb{K}[x_1, \ldots, x_n]$ be a polynomial ring over an effective field $\mathbb{K}$ with an effective zero test. Given a polynomial $P = \sum_{\alpha} P_\alpha x^\alpha \in A$, we call $\operatorname{supp} P = \{\alpha \in \mathbb{N}^n : P_\alpha \neq 0\}$ the support of $P$.

The naive multiplication of two sparse polynomials $P, Q \in A$ requires a priori $O(\#\operatorname{supp} P \cdot \#\operatorname{supp} Q)$ operations in $\mathbb{K}$. This upper bound is sharp if $P$ and $Q$ are very sparse, but pessimistic if $P$ and $Q$ are dense.
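For concreteness, here is a minimal sketch of the naive product (our Python illustration, not code from the literature; polynomials are represented as dictionaries mapping exponent tuples to non-zero coefficients):

    # Naive sparse product: #supp(P) * #supp(Q) coefficient multiplications.
    def sparse_mul(P, Q):
        R = {}
        for a, c in P.items():
            for b, d in Q.items():
                e = tuple(x + y for x, y in zip(a, b))
                v = R.get(e, 0) + c * d
                if v:
                    R[e] = v
                else:
                    R.pop(e, None)  # exact cancellation: keep the support sparse
        return R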

Assuming that $\mathbb{K}$ has characteristic zero, a better algorithm was proposed in [4] (see also [1, 8] for some background). The complexity of this algorithm is expressed in terms of the size $s = \#\operatorname{supp}(P Q)$ of the output. It is shown that $P$ and $Q$ can be multiplied using only $O(\mathsf{M}(s) \log s)$ operations in $\mathbb{K}$, where $\mathsf{M}(d)$ stands for the complexity of multiplying two univariate polynomials in $\mathbb{K}[x]$ of degrees $\leqslant d$. Unfortunately, the algorithm in [4] has two drawbacks:

  1. The algorithm leads to a large growth in the sizes of the coefficients, thereby compromising its bit complexity (which is often worse than the bit complexity of naive multiplication).

  2. It requires $\operatorname{supp}(P Q)$ to be known beforehand. More precisely, whenever a bound $\operatorname{supp}(P Q) \subseteq S$ is known, then we really have a multiplication algorithm of complexity $O(\mathsf{M}(\#S) \log \#S)$.

In practice, the second drawback is less important. Indeed, especially when the coefficients in $\mathbb{K}$ can become large, the computation of the support bound $S = \operatorname{supp} P + \operatorname{supp} Q$ is often cheap with respect to the multiplication itself, even if we compute it in a naive way.

Recently, several algorithms were proposed for removing the drawbacks of [4]. First of all, in [18] we proposed a practical algorithm with essentially the same advantages as the original algorithm from [4], but with a good bit complexity and a variant which also works in positive characteristic. However, it still requires a bound $S$ for $\operatorname{supp}(P Q)$ and it only works for special kinds of fields (which nevertheless cover the most important cases, such as $\mathbb{Q}$ and finite fields). Even faster algorithms were proposed in [15, 20], but these algorithms only work for special supports. Yet another algorithm was proposed in [13, 19]. This algorithm has none of the drawbacks of [4], but its complexity is suboptimal (although better than the complexity of naive multiplication).

At any rate, these recent developments make it possible to rely on fast sparse polynomial multiplication as a building block, both in theory and in practice. This makes it natural to study other operations on multivariate polynomials with this building block at our disposal. One of the most important such operations is division.

The multivariate analogue of polynomial division is the reduction of a polynomial $P$ with respect to an autoreduced tuple $B = (B_1, \ldots, B_b)$ of other polynomials. This leads to a relation

$P = Q_1 B_1 + \cdots + Q_b B_b + R \qquad (1)$

such that none of the terms occurring in $R$ can be further reduced with respect to $B$. In this paper, we are interested in the computation of the quotients $Q_1, \ldots, Q_b$ as well as the remainder $R$. We will call this the problem of extended reduction, in analogy with the notion of an “extended g.c.d.”.

Now in the univariate context, “relaxed power series” provide a convenient technique for the resolution of implicit equations [12, 13, 14, 16]. One major advantage of this technique is that it tends to respect most sparsity patterns which are present in the input data and in the equations. The main technical tool in this paper (see section 3) is to generalize this technique to the setting of multivariate polynomials, whose terms are ordered according to a specific admissible ordering on the monomials. This will make it possible to rewrite (1) as a so-called recursive equation (see section 4.2), which can be solved in a relaxed manner. Roughly speaking, the cost of the extended reduction then reduces to the cost of the relaxed multiplications $Q_1 B_1, \ldots, Q_b B_b$. Up to a logarithmic overhead, we will show (theorem 6) that this cost is the same as the cost of checking the relation (1).

Our main theorem 6 immediately raises a new question: is it possible to use the new reduction algorithm for speeding up Gröbner basis computations? Indeed, starting with Faugère's algorithm [6], the most efficient implementations for the computation of Gröbner bases currently rely on linear algebra. This is not surprising, since classical implementations of Buchberger's algorithm [2, 3] do not make use of efficient arithmetic on multivariate polynomials anyway. Hence, rewriting these classical algorithms in terms of linear algebra should be at least as efficient, while removing a lot of overhead and potentially taking advantage of fast linear algebra libraries.

Now fast arithmetic for multivariate polynomials potentially changes this picture. In section 5 we will describe a variant of Buchberger's algorithm in the particular case of homogeneous ideals. In this algorithm, we have made all polynomial reductions explicit, thereby making it clear where we may hope for potential gains, although we have not performed any detailed complexity analysis yet. It would also be interesting to study the affine case in detail. One particularly interesting question is whether the half g.c.d. algorithm [10] can also be extended using the techniques from section 3. Unfortunately, we fear that the sizes of the required transformation matrices ($2 \times 2$ matrices in the half g.c.d.) might become too large for this approach to be efficient.

In order to simplify the exposition, we will adopt a simplified sparse complexity model throughout this paper. In particular, our complexity analysis will not take into account the computation of support bounds for products or results of the extended reduction. Bit complexity issues will also be left aside in this paper.

2. Notations

Let $\mathbb{K}$ be an effective field with an effective zero test and let $x_1, \ldots, x_n$ be indeterminates. We will denote

$x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n}$

for any $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^n$. In particular, $A = \mathbb{K}[x_1, \ldots, x_n] = \mathbb{K}[x]$. For any subset $E \subseteq \mathbb{N}^n$ we will denote by $E + \mathbb{N}^n = \{\alpha + \beta : \alpha \in E,\ \beta \in \mathbb{N}^n\}$ the final segment generated by $E$ for the partial ordering $\leqslant$ on $\mathbb{N}^n$.

Let $\preccurlyeq$ be a total ordering on $\mathbb{N}^n$ which is compatible with addition. Two particular such orderings are the lexicographical ordering $\preccurlyeq_{\mathrm{lex}}$ and the reverse lexicographical ordering $\preccurlyeq_{\mathrm{rlex}}$:

$\alpha \prec_{\mathrm{lex}} \beta \iff \exists k,\ \alpha_k < \beta_k \text{ and } \alpha_i = \beta_i \text{ for all } i < k$

$\alpha \prec_{\mathrm{rlex}} \beta \iff \exists k,\ \alpha_k > \beta_k \text{ and } \alpha_i = \beta_i \text{ for all } i > k$

In general, it can be shown [11] that there exist real vectors $\lambda_1, \ldots, \lambda_k \in \mathbb{R}^n$ with $k \leqslant n$, such that

$\alpha \prec \beta \iff (\lambda_1 \cdot \alpha, \ldots, \lambda_k \cdot \alpha) <_{\mathrm{lex}} (\lambda_1 \cdot \beta, \ldots, \lambda_k \cdot \beta). \qquad (2)$

In what follows, we will assume that $k = n$ and $\lambda_i \in \mathbb{Z}^n$ for all $i$ (the $\lambda_i$ are then linearly independent, since $\preccurlyeq$ is a total ordering). We will also denote

$\lambda(\alpha) = (\lambda_1 \cdot \alpha, \ldots, \lambda_n \cdot \alpha),$

so that $\alpha \prec \beta \iff \lambda(\alpha) <_{\mathrm{lex}} \lambda(\beta)$. For instance, the graded reverse lexicographical ordering is obtained by taking $\lambda_1 = (1, \ldots, 1)$, $\lambda_2 = (0, \ldots, 0, -1)$, $\lambda_3 = (0, \ldots, 0, -1, 0)$, ..., $\lambda_n = (0, -1, 0, \ldots, 0)$.
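To make formula (2) concrete, here is a small Python sketch (ours; the function names are not from the paper) which compares exponents through the weight vectors $\lambda_1, \ldots, \lambda_n$ and builds the graded reverse lexicographical vectors just described:

    def weight_key(alpha, L):
        # the tuple (lambda_1 . alpha, ..., lambda_n . alpha) of formula (2)
        return tuple(sum(l * a for l, a in zip(lam, alpha)) for lam in L)

    def precedes(alpha, beta, L):
        # alpha < beta for the ordering defined by the weight vectors L
        return weight_key(alpha, L) < weight_key(beta, L)

    def grevlex_vectors(n):
        # lambda_1 = (1, ..., 1); lambda_k has a single -1 in position n - k + 2
        L = [[1] * n]
        for k in range(2, n + 1):
            lam = [0] * n
            lam[n - k + 1] = -1
            L.append(lam)
        return L

For instance, with $n = 2$, precedes((1, 1), (2, 0), grevlex_vectors(2)) returns True, i.e. $x y \prec x^2$.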

Given $P = \sum_{\alpha} P_\alpha x^\alpha \in A$, we define its support by

$\operatorname{supp} P = \{\alpha \in \mathbb{N}^n : P_\alpha \neq 0\}.$

If $P \neq 0$, then we also define its leading exponent and leading coefficient by

$\ell_P = \max\nolimits_{\preccurlyeq} \operatorname{supp} P, \qquad c_P = P_{\ell_P}.$

Given a finite set $S$, we will denote its cardinality by $\#S$.

3. Relaxed multiplication

3.1. Relaxed power series

Let us briefly recall the technique of relaxed power series computations, which is explained in more detail in [13]. In this computational model, a univariate power series $f \in \mathbb{K}[[x]]$ is regarded as a stream of coefficients $f_0, f_1, \ldots$. When performing an operation $h = \Phi(f, g)$ on power series, it is required that the coefficient $h_k$ of the result is output as soon as sufficiently many coefficients of the inputs are known, so that the computation of $h_k$ does not depend on the later coefficients. For instance, in the case of a multiplication $h = f g$, we require that $h_k$ is output as soon as $f_0, \ldots, f_k$ and $g_0, \ldots, g_k$ are known. In particular, we may use the naive formula $h_k = \sum_{i=0}^{k} f_i g_{k-i}$ for the computation of $h_k$.
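In code, this constraint just means that the product consumes its inputs as streams. A minimal Python sketch (our illustration), with series given as functions from indices to coefficients:

    def relaxed_mul_naive(f, g):
        # h(k) only calls f(i) and g(i) for i <= k, so the product is relaxed;
        # this is the naive O(n^2) relaxed multiplication.
        def h(k):
            return sum(f(i) * g(k - i) for i in range(k + 1))
        return h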

The additional constraint on the time when coefficients should be output admits the important advantage that the inputs may depend on the output, provided that we add a small delay. For instance, the exponential $g = \exp f$ of a power series $f \in \mathbb{K}[[x]]$ with $f_0 = 0$ may be computed in a relaxed way using the formula

$g = \int f' g.$

Indeed, when using the naive formula for products, the coefficient $g_k$ is given by

$g_k = \frac{1}{k} \sum_{i=1}^{k} i\, f_i\, g_{k-i} \qquad (k \geqslant 1),$

and the right-hand side only depends on the previously computed coefficients $g_0, \ldots, g_{k-1}$. More generally, equations of the form $g = \Phi(g)$ which have this property are called recursive equations and we refer to [17] for a mechanism to transform fairly general implicit equations into recursive equations.
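As an illustration (a Python sketch of ours, over $\mathbb{Q}$ via fractions), the recursive equation for the exponential can be solved coefficient by coefficient exactly as stated:

    from fractions import Fraction

    def relaxed_exp(f):
        # g = exp(f) for a series f with f(0) = 0, via
        # g_k = (1/k) * sum_{i=1..k} i * f(i) * g(k-i);
        # the right-hand side only uses g_0, ..., g_{k-1}.
        cache = {0: Fraction(1)}  # g_0 = exp(0) = 1
        def g(k):
            if k not in cache:
                cache[k] = sum(i * f(i) * g(k - i) for i in range(1, k + 1)) / Fraction(k)
            return cache[k]
        return g

    # exp(x): f(1) = 1, all other coefficients zero
    e = relaxed_exp(lambda i: Fraction(1 if i == 1 else 0))
    assert [e(k) for k in range(5)] == [1, 1, Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]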

The main drawback of the relaxed approach is that we cannot directly use fast algorithms on polynomials for computations with power series. For instance, assuming that $\mathbb{K}$ has sufficiently many $2^p$-th roots of unity and that field operations in $\mathbb{K}$ can be done in time $O(1)$, two polynomials of degrees $< n$ can be multiplied in time $\mathsf{M}(n) = O(n \log n)$, using FFT multiplication [5]. Given the truncations $f_{;n} = f_0 + \cdots + f_{n-1} x^{n-1}$ and $g_{;n}$ at order $n$ of power series $f, g \in \mathbb{K}[[x]]$, we may thus compute the truncated product $(f g)_{;n}$ in time $O(n \log n)$ as well. This is much faster than the naive $O(n^2)$ relaxed multiplication algorithm for the computation of $(f g)_{;n}$. However, when using FFT multiplication, the formula for $(f g)_0$ depends on all input coefficients $f_0, \ldots, f_{n-1}$ and $g_0, \ldots, g_{n-1}$, so the fast algorithm is not relaxed (we will say that FFT multiplication is a zealous algorithm). Fortunately, efficient relaxed multiplication algorithms do exist:

Theorem 1. [12, 13] Let $\mathsf{M}(n)$ be the time complexity for the multiplication of polynomials of degrees $< n$ in $\mathbb{K}[x]$. Then there exists a relaxed multiplication algorithm for series in $\mathbb{K}[[x]]$ at order $n$ of time complexity $O(\mathsf{M}(n) \log n)$.

Remark 2. In fact, the algorithm from theorem 1 generalizes to the case when the multiplication on $\mathbb{K}$ is replaced by an arbitrary bilinear “multiplication” $M_1 \times M_2 \to M_3$, where $M_1$, $M_2$ and $M_3$ are effective modules over an effective ring $R$. If $\mathsf{M}(n)$ denotes the time complexity for multiplying two polynomials $P \in M_1[x]$ and $Q \in M_2[x]$ of degrees $< n$, then we again obtain a relaxed multiplication algorithm for series $f \in M_1[[x]]$ and $g \in M_2[[x]]$ at order $n$, of time complexity $O(\mathsf{M}(n) \log n)$.

Theorem 3. [16] If $\mathbb{K}$ admits a primitive $2^p$-th root of unity for all $p$, then there exists a relaxed multiplication algorithm of time complexity $O(n \log n\, \mathrm{e}^{2 \sqrt{\log 2\, \log \log n}})$. In practice, the existence of a $2^p$-th root of unity with $2^p \geqslant n$ suffices for multiplication up to order $n$.

3.2. Relaxed Laurent series and polynomials in several variables

Let $R$ be an effective ring. A power series $f \in R[[x]]$ is said to be computable if there is an algorithm which takes $k \in \mathbb{N}$ on input and produces the coefficient $f_k$ on output. We will denote by $R[[x]]^{\mathrm{com}}$ the set of such series. Then $R[[x]]^{\mathrm{com}}$ is an effective ring for relaxed addition, subtraction and multiplication.

A computable Laurent series is a formal product $f = x^v g$ with $v \in \mathbb{Z}$ and $g \in R[[x]]^{\mathrm{com}}$. The set $R((x))^{\mathrm{com}}$ of such series forms an effective ring for the addition, subtraction and multiplication defined by

$x^v f \pm x^w g = x^{\min(v, w)} \big( x^{v - \min(v, w)} f \pm x^{w - \min(v, w)} g \big)$

$(x^v f)\,(x^w g) = x^{v + w} f g.$

If $R$ is an effective field with an effective zero test, then we may also define an effective division on $R((x))^{\mathrm{com}}$, but this operation will not be needed in what follows.
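A compact Python sketch of this representation (ours; the class name and interface are invented for illustration):

    class Laurent:
        # a computable Laurent series x^v * g, with g a stream k -> g_k
        def __init__(self, v, g):
            self.v, self.g = v, g
        def coeff(self, k):  # coefficient of x^k in the full series
            return self.g(k - self.v) if k >= self.v else 0
        def __add__(self, other):
            w = min(self.v, other.v)
            return Laurent(w, lambda k: self.coeff(k + w) + other.coeff(k + w))
        def __mul__(self, other):
            # (x^v f) * (x^w g) = x^(v+w) * (f g), with a naive relaxed product
            def prod(k):
                return sum(self.g(i) * other.g(k - i) for i in range(k + 1))
            return Laurent(self.v + other.v, prod)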

Assume now that the variable $x$ is replaced by a finite number of variables $t_1, \ldots, t_n$. Then an element of

$R((t_1; \ldots; t_n))^{\mathrm{com}} := \big( \cdots \big( R((t_n))^{\mathrm{com}} \big) \cdots \big)((t_1))^{\mathrm{com}}$

will also be called a “computable lexicographical Laurent series”. Any non-zero $f \in R((t_1; \ldots; t_n))^{\mathrm{com}}$ has a natural valuation $v_f = (v_1, \ldots, v_n) \in \mathbb{Z}^n$, by setting $v_1 = \operatorname{val}_{t_1} f$, $v_2 = \operatorname{val}_{t_2} f_{t_1^{v_1}}$, etc., where $f_{t_1^{v_1}}$ stands for the coefficient of $t_1^{v_1}$ in $f$. The concept of recursive equations naturally generalizes to the multivariate context. For instance, for an infinitesimal Laurent series $\varepsilon$ (that is, $v_\varepsilon >_{\mathrm{lex}} 0$, where $0 = (0, \ldots, 0)$), the formula

$f = 1 + \varepsilon f$

allows us to compute $f = (1 - \varepsilon)^{-1}$ using a single relaxed multiplication in $R((t_1; \ldots; t_n))^{\mathrm{com}}$.

Now take $R = \mathbb{K}$ and consider a polynomial $P \in A$. Then we define the Laurent polynomial $\hat{P} \in \mathbb{K}((t_1; \ldots; t_n))^{\mathrm{com}}$ by

$\hat{P} = \sum_{\alpha \in \operatorname{supp} P} P_\alpha\, t^{\lambda(\alpha)}, \qquad t^{\beta} := t_1^{\beta_1} \cdots t_n^{\beta_n}.$

Conversely, given $\hat{P}$, we define $P$ by substituting $t^{\lambda(\alpha)} \mapsto x^\alpha$ in $\hat{P}$ (this is possible since $\alpha \mapsto \lambda(\alpha)$ is injective). These transformations provide us with a relaxed mechanism to compute with multivariate polynomials in $A$, in such a way that the admissible ordering $\preccurlyeq$ on the monomials is respected. For instance, we may compute the relaxed product of two polynomials $P, Q \in A$ by computing the relaxed product $\hat{P} \hat{Q}$ and substituting $t^{\lambda(\alpha)} \mapsto x^\alpha$ in the result.
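A small Python sketch of the transformation $P \mapsto \hat{P}$ (ours, reusing weight_key from section 2): each term is keyed by $\lambda(\alpha)$, and sorting the keys lexicographically enumerates the terms in the order prescribed by $\preccurlyeq$.

    def hat(P, L):
        # P: dict mapping exponent tuples alpha to coefficients.
        # Returns the terms of P ordered by decreasing lambda(alpha), i.e. the
        # term stream of the Laurent polynomial P-hat; keeping alpha alongside
        # makes the substitution t^lambda(alpha) -> x^alpha trivial to invert.
        return sorted(((weight_key(a, L), a, c) for a, c in P.items()), reverse=True)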

3.3. Complexity analysis

Assume now that we are given $P, Q \in A$ and a finite set $S \subseteq \mathbb{N}^n$ such that $\operatorname{supp} P + \operatorname{supp} Q \subseteq S$. We assume that $\mathsf{M}^{\ast}$ is a function such that the (zealous) product $P Q$ can be computed in time $\mathsf{M}^{\ast}(\#S)$. We will also assume that $\mathsf{M}^{\ast}(s)/s$ is an increasing function of $s$. In [4, 9], it is shown that we may take $\mathsf{M}^{\ast}(s) = O(\mathsf{M}(s) \log s)$, with $\mathsf{M}$ as in theorem 1.

Let us now study the complexity of the sparse relaxed multiplication of $P$ and $Q$. We will use the classical algorithm for fast univariate relaxed multiplication from [12, 13], of time complexity $O(\mathsf{M}(n) \log n)$. We will also consider semi-relaxed multiplication as in [14], where one of the arguments $P$ or $Q$ is completely known in advance and only the other one is computed in a relaxed manner.

Given $P$, $Q$ and $S$ as above, we will denote

$s = \#S.$

We now have the following:

Theorem 4. With the above notations, the relaxed product of $P$ and $Q$ can be computed in time $O(\mathsf{M}^{\ast}(s) \log s)$.

Proof. In order to simplify our exposition, we will rather prove the theorem for a semi-relaxed product of $P$ (relaxed) and $Q$ (known in advance). Our proof will be easy to adapt to the case of a full relaxed multiplication. We will prove by induction over $n$ that the semi-relaxed product can be computed using at most $K\, \mathsf{M}^{\ast}(s) \log s$ operations in $\mathbb{K}$, for a suitable universal constant $K$, if $s$ is sufficiently large. For $n = 0$, we have nothing to do, so assume that $n \geqslant 1$.

Let us first consider the semi-relaxed product of $\hat{P}$ and $\hat{Q}$ with respect to $t_1$. Setting $d = \deg_{t_1}(\hat{P}\hat{Q}) - \operatorname{val}_{t_1}(\hat{P}\hat{Q})$, the computation of this product corresponds (see the right-hand side of figure 1) to the computation of $O(d)$ zealous $1 \times 1$ products (i.e. products of polynomials of degrees $< 1$ in $t_1$), $O(d/2)$ zealous $2 \times 2$ products, and so on, until $O(1)$ zealous $2^{\lfloor \log_2 d \rfloor} \times 2^{\lfloor \log_2 d \rfloor}$ products. We finally need to perform $O(d)$ semi-relaxed products of series in $t_2, \ldots, t_n$ only.

More precisely, assume that $\hat{P}$ and $\hat{Q}$ have valuations $u$ resp. $v$ in $t_1$ and let $\hat{P}_i$ stand for the coefficient of $t_1^i$ in $\hat{P}$. We also define

$\hat{P}_I = \sum_{i \in I} \hat{P}_i\, t_1^i$

for any finite set $I \subseteq \mathbb{N}$ of indices (and similarly for $\hat{Q}$). Now consider a block size $2^p \leqslant d$. For each $k \geqslant 1$, we define

$I_{p,k} = \{u + k\, 2^p, \ldots, u + (k+1)\, 2^p - 1\}, \qquad J_p = \{v + 2^p, \ldots, v + 2^{p+1} - 1\},$

and notice that the $I_{p,k}$ are pairwise disjoint for fixed $p$. In the semi-relaxed multiplication, we have to compute the zealous products $\hat{P}_{I_{p,k}}\, \hat{Q}_{J_p}$ for all $k$. Since, for fixed $p$, the $t_1$-supports of these products cover each $t_1$-degree at most twice,

$\sum_k \#\big(\operatorname{supp} \hat{P}_{I_{p,k}} + \operatorname{supp} \hat{Q}_{J_p}\big) \leqslant 2 s,$

we may compute all these products in time

$\sum_k \mathsf{M}^{\ast}\Big(\#\big(\operatorname{supp} \hat{P}_{I_{p,k}} + \operatorname{supp} \hat{Q}_{J_p}\big)\Big) \leqslant 2\, \mathsf{M}^{\ast}(s),$

using our assumption that $\mathsf{M}^{\ast}(s)/s$ is increasing. The total time spent in performing all zealous block multiplications with $2^p \leqslant d$ is therefore bounded by $O(\mathsf{M}^{\ast}(s) \log s)$.

Let us next consider the remaining semi-relaxed products. If $n = 1$, then these are really scalar products, whence the remaining work can clearly be performed in time $\mathsf{M}^{\ast}(s) \log s$ if $s$ is sufficiently large. If $n > 1$, then these products concern series in $t_2, \ldots, t_n$ only, and for each $i$, we have

$s_i := \#\{\alpha \in S : \lambda_1 \cdot \alpha = i\}, \qquad \sum\nolimits_i s_i = s.$

By the induction hypothesis, we may therefore perform the semi-relaxed product at $t_1$-degree $i$ in time $K\, \mathsf{M}^{\ast}(s_i) \log s_i$. A similar argument as above now yields the bound $O(\mathsf{M}^{\ast}(s) \log s)$ for performing all semi-relaxed block products. The total execution time (which also takes into account the final additions) is therefore bounded by $K\, \mathsf{M}^{\ast}(s) \log s$, for a suitable choice of $K$. This completes the induction.

Figure 1. Illustration of a fast relaxed product and a fast semi-relaxed product.

4. Polynomial reduction

4.1. Naive extended reduction

Consider a tuple $B = (B_1, \ldots, B_b) \in (A \setminus \{0\})^b$. We say that $B$ is autoreduced if $c_{B_i} = 1$ for all $i$, and $\operatorname{supp} B_i \cap (\ell_{B_j} + \mathbb{N}^n) = \emptyset$ for all $i$ and $j \neq i$. Given such a tuple and an arbitrary polynomial $P \in A$, we say that $P$ is reduced with respect to $B$ if $\alpha \notin \ell_{B_i} + \mathbb{N}^n$ for all $\alpha \in \operatorname{supp} P$ and all $i$. An extended reduction of $P$ with respect to $B$ is a tuple $(Q_1, \ldots, Q_b, R) \in A^{b+1}$ with

$P = Q_1 B_1 + \cdots + Q_b B_b + R \qquad (3)$

such that $R$ is reduced with respect to $B$. The naive algorithm extended-reduce below computes an extended reduction of $P$.
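For instance (a worked example of ours, easily checked by hand), take $n = 2$ with the graded reverse lexicographical ordering, and

$B_1 = x y - 1, \qquad B_2 = y^2 - 1, \qquad P = x^2 y + x y^2 + y^2.$

Then $B = (B_1, B_2)$ is autoreduced, with $\ell_{B_1} = (1, 1)$ and $\ell_{B_2} = (0, 2)$, and

$P = \underbrace{(x + y)}_{Q_1} (x y - 1) + \underbrace{1}_{Q_2}\, (y^2 - 1) + \underbrace{(x + y + 1)}_{R},$

where $R = x + y + 1$ is reduced with respect to $B$, since none of its exponents lies in $(\ell_{B_1} + \mathbb{N}^2) \cup (\ell_{B_2} + \mathbb{N}^2)$. This is exactly the extended reduction computed by the algorithm below.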

Algorithm extended-reduce

Input: $P \in A$ and an autoreduced tuple $B = (B_1, \ldots, B_b) \in A^b$

Output: an extended reduction $(Q_1, \ldots, Q_b, R)$ of $P$ with respect to $B$

Start with $Q_1 := \cdots := Q_b := 0$ and $R := P$

While $R$ is not reduced with respect to $B$ do

Let $i$ be minimal such that $\alpha \in \ell_{B_i} + \mathbb{N}^n$ for some $\alpha \in \operatorname{supp} R$

Let $\alpha \in \operatorname{supp} R$ be maximal with $\alpha \in \ell_{B_i} + \mathbb{N}^n$

Set $Q_i := Q_i + R_\alpha\, x^{\alpha - \ell_{B_i}}$ and $R := R - R_\alpha\, x^{\alpha - \ell_{B_i}} B_i$

Return $(Q_1, \ldots, Q_b, R)$
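The following Python transcription of extended-reduce (ours; it assumes, as in the definition of autoreduced tuples, that every $c_{B_i} = 1$, and it reuses weight_key from section 2 to realize $\preccurlyeq$) may help to fix ideas:

    def divides(l, a):
        # a lies in the final segment l + N^n
        return all(x <= y for x, y in zip(l, a))

    def extended_reduce(P, B, key):
        # P: dict exponent -> coefficient; B: list of monic autoreduced dicts;
        # key(alpha): the weight tuple realizing the term ordering.
        R = dict(P)
        Q = [dict() for _ in B]
        lead = [max(Bi, key=key) for Bi in B]      # leading exponents l_{B_i}
        while True:
            for i, l in enumerate(lead):           # i minimal ...
                cand = [a for a in R if divides(l, a)]
                if cand:
                    a = max(cand, key=key)         # ... and alpha maximal
                    m = tuple(x - y for x, y in zip(a, l))
                    c = R[a]
                    Q[i][m] = Q[i].get(m, 0) + c
                    for b, d in B[i].items():      # R := R - c * x^m * B_i
                        e = tuple(x + y for x, y in zip(m, b))
                        v = R.get(e, 0) - c * d
                        if v: R[e] = v
                        else: R.pop(e, None)
                    break
            else:
                return Q, R

On the worked example above, calling extended_reduce with key = lambda a: weight_key(a, grevlex_vectors(2)) returns $Q_1 = x + y$, $Q_2 = 1$ and $R = x + y + 1$.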

Remark 5. Although an extended reduction is usually not unique, the one computed by extended-reduce is uniquely determined by the fact that, in our main loop, we take $i$ minimal with $\alpha \in \ell_{B_i} + \mathbb{N}^n$ for some $\alpha \in \operatorname{supp} R$. This particular extended reduction is also characterized by the fact that

$(\operatorname{supp} Q_i + \ell_{B_i}) \cap \big( (\ell_{B_1} + \mathbb{N}^n) \cup \cdots \cup (\ell_{B_{i-1}} + \mathbb{N}^n) \big) = \emptyset$

for each $i$.

In order to compute $Q_1, \ldots, Q_b$ and $R$ in a relaxed manner, upper bounds

$\operatorname{supp} Q_1 \subseteq S_{Q_1},\ \ldots,\ \operatorname{supp} Q_b \subseteq S_{Q_b}, \qquad \operatorname{supp} R \subseteq S_R$

need to be known beforehand. These upper bounds are easily computed as a function of a bound $S_P \supseteq \operatorname{supp} P$ and the supports $S_{B_i} = \operatorname{supp} B_i$ by the variant supp-extended-reduce of extended-reduce below. We recall from the end of the introduction that we do not take the cost of this computation into account in our complexity analysis. In reality, the execution time of supp-extended-reduce is similar to that of extended-reduce, except that potentially expensive operations in $\mathbb{K}$ are replaced by boolean operations of unit cost. We also recall that support bounds can often be obtained by other means for specific problems.

Algorithm supp-extended-reduce

Input: subsets $S_P$ and $S_{B_1}, \ldots, S_{B_b}$ of $\mathbb{N}^n$ as above

Output: subsets $S_{Q_1}, \ldots, S_{Q_b}$ and $S_R$ of $\mathbb{N}^n$ as above

Start with $S_{Q_1} := \cdots := S_{Q_b} := \emptyset$ and $S_R := S_P$

While $S_R \cap \big( (\ell_{B_1} + \mathbb{N}^n) \cup \cdots \cup (\ell_{B_b} + \mathbb{N}^n) \big) \neq \emptyset$ do

Let $i$ be minimal such that $\alpha \in \ell_{B_i} + \mathbb{N}^n$ for some $\alpha \in S_R$

Let $\alpha \in S_R$ be maximal with $\alpha \in \ell_{B_i} + \mathbb{N}^n$

Set $S_{Q_i} := S_{Q_i} \cup \{\alpha - \ell_{B_i}\}$ and $S_R := \big( S_R \cup ((\alpha - \ell_{B_i}) + S_{B_i}) \big) \setminus \{\alpha\}$

Return $(S_{Q_1}, \ldots, S_{Q_b}, S_R)$
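A set-based Python sketch of this variant (ours, reusing divides and the key convention from the previous sketch):

    def supp_extended_reduce(SP, SB, key):
        # SP: a set containing supp P; SB[i] = supp B_i (known exactly);
        # returns upper bounds (SQ_1, ..., SQ_b, SR) for the supports of the
        # polynomials Q_i and R computed by extended-reduce.
        SR = set(SP)
        SQ = [set() for _ in SB]
        lead = [max(S, key=key) for S in SB]
        while True:
            for i, l in enumerate(lead):
                cand = [a for a in SR if divides(l, a)]
                if cand:
                    a = max(cand, key=key)
                    m = tuple(x - y for x, y in zip(a, l))
                    SQ[i].add(m)
                    SR |= {tuple(x + y for x, y in zip(m, b)) for b in SB[i]}
                    SR.discard(a)   # the term at alpha certainly cancels
                    break
            else:
                return SQ, SR

Every exponent newly inserted into SR is strictly smaller than the removed exponent $\alpha$ for $\preccurlyeq$, which guarantees termination.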

4.2. Relaxed extended reduction

Using the relaxed multiplication from section 3, we are now in a position to replace the algorithm extended-reduce by a new algorithm, which directly computes $(Q_1, \ldots, Q_b, R)$ using the equation (3). In order to do this, we still have to put (3) in a recursive form which is suitable for relaxed resolution.

Denoting by $e_1, \ldots, e_{b+1}$ the canonical basis vectors of $A^{b+1}$, we first define an operator $\Phi$ on monomials by

$\Phi(x^\alpha) = \begin{cases} x^{\alpha - \ell_{B_i}}\, e_i & \text{if $i$ is minimal with $\alpha \in \ell_{B_i} + \mathbb{N}^n$}\\ x^\alpha\, e_{b+1} & \text{if $\alpha \notin (\ell_{B_1} + \mathbb{N}^n) \cup \cdots \cup (\ell_{B_b} + \mathbb{N}^n)$.} \end{cases}$

By linearity, this operator extends to an operator

$\Phi: A \to A^{b+1}.$

In particular, $\Phi(c_P\, x^{\ell_P})$ yields the “leading term” of the extended reduction $(Q_1, \ldots, Q_b, R)$ of $P$. We also denote by $\hat{\Phi}$ the corresponding operator from $\mathbb{K}((t_1; \ldots; t_n))^{\mathrm{com}}$ to $\big(\mathbb{K}((t_1; \ldots; t_n))^{\mathrm{com}}\big)^{b+1}$ which sends $\hat{P}$ to $\widehat{\Phi(P)}$.

Now let $\tilde{B}_i = x^{\ell_{B_i}} - B_i$ for each $i$. Then

$\beta \prec \ell_{B_i}$

for each $i$ and $\beta \in \operatorname{supp} \tilde{B}_i$. The equation

$P = Q_1 B_1 + \cdots + Q_b B_b + R$

can thus be rewritten as

$Q_1 x^{\ell_{B_1}} + \cdots + Q_b x^{\ell_{B_b}} + R = P + Q_1 \tilde{B}_1 + \cdots + Q_b \tilde{B}_b.$

Using the operator $\Phi$, this equation can be rewritten in a more compact form as

$(Q_1, \ldots, Q_b, R) = \Phi\big( P + Q_1 \tilde{B}_1 + \cdots + Q_b \tilde{B}_b \big).$

The corresponding equation

$(\hat{Q}_1, \ldots, \hat{Q}_b, \hat{R}) = \hat{\Phi}\big( \hat{P} + \hat{Q}_1 \hat{\tilde{B}}_1 + \cdots + \hat{Q}_b \hat{\tilde{B}}_b \big)$

is recursive, whence the extended reduction can be computed using $b$ multivariate relaxed multiplications $\hat{Q}_1 \hat{\tilde{B}}_1, \ldots, \hat{Q}_b \hat{\tilde{B}}_b$. With $\mathsf{M}^{\ast}$ and $s$ as in the previous section, theorem 4 therefore implies:

Theorem 6. We may compute the extended reduction of $P$ with respect to $B$ in time

$O(\mathsf{M}^{\ast}(s) \log s), \qquad s = \#S_{Q_1 \tilde{B}_1} + \cdots + \#S_{Q_b \tilde{B}_b} + \#S_R,$

where $S_{Q_i \tilde{B}_i} = S_{Q_i} + \operatorname{supp} \tilde{B}_i$.

Remark 7. Following remark 2, we also notice that $P$, the $Q_i$ and $R$ may be replaced by vectors of polynomials in $A^k$ (regarded as polynomials with coefficients in $\mathbb{K}^k$), in the case when several polynomials need to be reduced simultaneously.

5. Application to Gröbner basis computations

It is natural to examine whether we can use our relaxed extended reduction algorithm in order to speed up Gröbner basis computations. We will restrict ourselves to homogeneous ideals. In this case we may reformulate Buchberger's algorithm in an incremental way, which will simplify the discussion and illustrate more clearly the interest of the new algorithms.

In what follows, we will assume that $\lambda_{1,i} > 0$ for all $i$. A polynomial $P \in A$ will be said to be homogeneous if $\lambda_1 \cdot \alpha$ is constant for $\alpha \in \operatorname{supp} P$, and we will denote by $\deg P$ that constant (if $P \neq 0$). Given a set $G$ of homogeneous polynomials in $A$ and $d \in \mathbb{N}$, we will also write $G_d$ (resp. $G_{\leqslant d}$) for the subset of polynomials of degree $d$ (resp. $\leqslant d$). Given two polynomials $P, Q \in A \setminus \{0\}$, we will write $\operatorname{spol}(P, Q)$ for the $S$-polynomial of $P$ and $Q$.
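For the record, a short Python sketch of the $S$-polynomial for monic polynomials in the dictionary representation used above (our code; the name spol and the representation are ours):

    def spol(P, Q, key):
        # spol(P, Q) = x^(g - lP) * P - x^(g - lQ) * Q, with x^g = lcm(x^lP, x^lQ);
        # P and Q are assumed monic, so the leading terms cancel.
        lP, lQ = max(P, key=key), max(Q, key=key)
        g = tuple(max(a, b) for a, b in zip(lP, lQ))
        def shift(F, m):
            return {tuple(x + y for x, y in zip(a, m)): c for a, c in F.items()}
        R = shift(P, tuple(x - y for x, y in zip(g, lP)))
        for e, c in shift(Q, tuple(x - y for x, y in zip(g, lQ))).items():
            v = R.get(e, 0) - c
            if v: R[e] = v
            else: R.pop(e, None)
        return R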

Assume now that we are given a finite set $F$ of homogeneous polynomials and let $G$ be the reduced Gröbner basis for the ideal generated by $F$. For each degree $d$, we observe that $G_{\leqslant d}$ can be computed as a function of $F_{\leqslant d}$ only. Indeed, the reduction of a homogeneous polynomial of degree $d$ with respect to $G_{\leqslant d}$ is again homogeneous of degree $d$. Similarly, a non-zero $S$-polynomial of two homogeneous polynomials of degrees $d_1$ and $d_2$ is of degree at least $\max(d_1, d_2)$.

We may thus compute $G_{\leqslant d}$ as a function of $F_{\leqslant d}$ by induction over $d$. Given $G_{\leqslant d-1}$ and $F_{\leqslant d}$, all elements of $G_d$ are obtained as reductions of elements in $F_d$ or reductions of $S$-polynomials of degree $d$ of elements in $G_{\leqslant d-1}$. These reductions are either done with respect to $G_{\leqslant d-1}$ (and we will use our relaxed algorithm for this step) or with respect to other elements of degree $d$ (in which case we are really doing linear algebra in the finite dimensional $\mathbb{K}$-vector space $A_d$).

Apart from unnecessary reductions to zero of $S$-polynomials, the above incremental scheme for the computation of $G$ contains no redundant operations. As in optimized versions of Buchberger's algorithm, we will therefore assume that a predicate reject-criterion has been implemented for rejecting such critical pairs. More precisely, whenever reject-criterion$(P, Q)$ holds, we are sure that $\operatorname{spol}(P, Q)$ will reduce to zero. We may for instance use the classical Buchberger criteria [3], or more sophisticated criteria, as used in [7].

The algorithm Gröbner-basis below makes the above informal discussion more precise. We have used the notation $P \operatorname{rem} G$ for the reduction of a polynomial $P$ with respect to an autoreduced set $G$. Here we notice that we may associate a unique autoreduced tuple $(G_1, \ldots, G_g)$ to an autoreduced set $G = \{G_1, \ldots, G_g\}$ by requiring that $\ell_{G_1} \succ \cdots \succ \ell_{G_g}$. Then $P \operatorname{rem} G$ just stands for the last entry $R$ of the extended reduction of $P$ with respect to $(G_1, \ldots, G_g)$.

Algorithm Gröbner-basis

Input: a finite set $F$ of homogeneous polynomials in $A$

Output: the reduced Gröbner basis for the ideal generated by $F$

$G := \emptyset$, $T := F$

While $T \neq \emptyset$ do

Let $d$ be minimal with $T_d \neq \emptyset$

Let $H := \{P \operatorname{rem} G : P \in T_d\} \setminus \{0\}$ and $T := T \setminus T_d$

Autoreduce $H$ (linear algebra in $A_d$)

Let $G := G \cup H$

For all pairs $P \in H$ and $Q \in G$ with $P \neq Q$ do

If reject-criterion$(P, Q)$ does not hold and $\operatorname{spol}(P, Q) \neq 0$ then

Set $T := T \cup \{\operatorname{spol}(P, Q)\}$

Return $G$

Remark 8. According to remark 7, the reduction of the elements of $T_d$ with respect to $G$ can optionally be performed vectorwise.

Remark 9. If the criterion for rejection is sufficiently powerful, then the cardinality of $H$ after autoreduction is exactly equal to the cardinality of $T_d$ at the start of the loop. In that case, the execution time of the algorithm is therefore the same as the time needed to verify the reduction steps, up to a logarithmic overhead.

Bibliography

[1]

M. Ben-Or and P. Tiwari. A deterministic algorithm for sparse multivariate polynomial interpolation. In STOC '88: Proceedings of the twentieth annual ACM symposium on Theory of computing, pages 301–309, New York, NY, USA, 1988. ACM.

[2]

B. Buchberger. Ein Algorithmus zum auffinden der Basiselemente des Restklassenringes nach einem null-dimensionalen Polynomideal. PhD thesis, University of Innsbruck, 1965.

[3]

B. Buchberger. Multidimensional Systems Theory, chapter Gröbner bases: an algorithmic method in polynomial ideal theory, pages 184–232. Reidel, 1985. Chapter 6.

[4]

J. Canny, E. Kaltofen, and Y. Lakshman. Solving systems of non-linear polynomial equations faster. In Proc. ISSAC '89, pages 121–128, Portland, Oregon, 1989. ACM Press.

[5]

J.W. Cooley and J.W. Tukey. An algorithm for the machine calculation of complex Fourier series. Math. Computat., 19:297–301, 1965.

[6]

J.-C. Faugère. A new efficient algorithm for computing Gröbner bases (F4). Journal of Pure and Applied Algebra, 139(1–3):61–88, 1999.

[7]

J.-C. Faugère. A new efficient algorithm for computing Gröbner bases without reduction to zero (F5). In T. Mora, editor, Proc. ISSAC '02, pages 75–83, Lille, France, July 2002.

[8]

D. Y. Grigoriev and M. Karpinski. The matching problem for bipartite graphs with polynomially bounded permanents is in NC. In Proceedings of the 28th IEEE Symposium on the Foundations of Computer Science, pages 166–172, 1987.

[9]

E. Kaltofen and Y. N. Lakshman. Improved sparse multivariate polynomial interpolation algorithms. In ISSAC '88: Proceedings of the international symposium on Symbolic and algebraic computation, pages 467–474. Springer Verlag, 1988.

[10]

R. Moenck. Fast computation of gcds. In Proc. of the 5th ACM Annual Symposium on Theory of Computing, pages 142–171, New York, 1973. ACM Press.

[11]

L. Robbiano. Term orderings on the polynomial ring. In European Conference on Computer Algebra (2), pages 513–517, 1985.

[12]

J. van der Hoeven. Lazy multiplication of formal power series. In W. W. Küchlin, editor, Proc. ISSAC '97, pages 17–20, Maui, Hawaii, July 1997.

[13]

J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.

[14]

J. van der Hoeven. Relaxed multiplication using the middle product. In Manuel Bronstein, editor, Proc. ISSAC '03, pages 143–147, Philadelphia, USA, August 2003.

[15]

J. van der Hoeven. The truncated Fourier transform and applications. In J. Gutierrez, editor, Proc. ISSAC 2004, pages 290–296, Univ. of Cantabria, Santander, Spain, July 4–7 2004.

[16]

J. van der Hoeven. New algorithms for relaxed multiplication. JSC, 42(8):792–802, 2007.

[17]

J. van der Hoeven. From implicit to recursive equations. Technical report, HAL, 2011. http://hal.archives-ouvertes.fr/hal-00583125/fr/.

[18]

J. van der Hoeven and G. Lecerf. On the bit-complexity of sparse polynomial multiplication. Technical report, HAL, 2010. http://hal.archives-ouvertes.fr/hal-00476223/fr/.

[19]

J. van der Hoeven and G. Lecerf. On the complexity of blockwise polynomial multiplication. Submitted to ISSAC, 2012.

[20]

J. van der Hoeven and É. Schost. Multi-point evaluation in higher dimensions. Technical report, HAL, 2010. http://hal.archives-ouvertes.fr/hal-00477658/fr/.