
This work has been supported by the ANR-09-JCJC-0098-01 MaGiX project, as well as a Digiteo 2009-36HD grant and Région Ile-de-France.

Efficient root counting for analytic functions on a disk

Joris van der Hoeven

LIX, CNRS

École polytechnique

91128 Palaiseau Cedex

France

Email: vdhoeven@lix.polytechnique.fr

Web: http://lix.polytechnique.fr/~vdhoeven

January 5, 2015

In this note, we present a variant of an algorithm by Schönhage for counting the number of zeros of a complex polynomial in a disk. Our algorithm implements a few optimizations and also applies to more general analytic functions.

Keywords: Root counting, reliable computation, Graeffe method

A.M.S. subject classification: 68W25, 68W30, 65G20, 30B10, 42-04

1. Introduction

Many algorithms have been proposed for the reliable computation of zeros of complex polynomials [Sch82, Gou96, KvB00, Pan02], assuming multiple precision arithmetic. A slightly less ambitious problem is to count the number of zeros in a given disk; see also [Rum10, Sections 13.2 and 13.3]. In this paper, we present a variant of Schönhage's method [Sch82], based on iterated Graeffe transforms, but with a few advantages.

We present our algorithm in the setting of ball arithmetic [vdH09], which is our preferred variant of interval arithmetic [Moo66, AH83, MKC09, Rum10]. In this framework, a large part of the burden of bound computations is taken away from our shoulders and moved to the underlying arithmetic. Moreover, the algorithm naturally applies to analytic functions, which are represented by an approximating polynomial and a bound for the error. Finally, during the iterated application of Graeffe transforms, some coefficients of the polynomial become very small. Ball arithmetic allows us to move such coefficients to a global error term and reduce the degree of the polynomial, thereby speeding up the overall algorithm.

Our algorithm is presented for analytic functions whose power series expansion is given explicitly up to a given order, together with an error bound for the tail. In practice, we are often given a program which may compute the expansion (and tail bound) up to any required order. If we are expecting a $k$-fold multiple root, then it is good practice to compute the expansion up to order $n$ for some $n \geqslant 2 k$: this is usually only a constant factor more expensive than the computation of an expansion up to order $k$, but greatly improves the quality of the error bounds and the probability that our root counting algorithm will be successful.

2. Ball arithmetic

2.1. Basic principles

Let us briefly recall the principles behind ball arithmetic, while referring to [vdH09] for details. Given a normed vector space $V$, we will denote by $\mathcal{B}(V)$ or $\mathcal{B}(V, \mathbb{R})$ the set of closed balls with centers in $V$ and radii in $\mathbb{R}^{\geqslant} = \{x \in \mathbb{R} : x \geqslant 0\}$. Given such a ball $\boldsymbol{x}$, we will denote its center by $\operatorname{cen}(\boldsymbol{x})$ and its radius by $\operatorname{rad}(\boldsymbol{x})$. Conversely, given $c \in V$ and $r \in \mathbb{R}^{\geqslant}$, we will denote by $\mathcal{B}(c, r)$ the closed ball with center $c$ and radius $r$.

A continuous operation $f : V_1 \times \cdots \times V_l \to W$ is said to lift into an operation $\boldsymbol{f} : \mathcal{B}(V_1) \times \cdots \times \mathcal{B}(V_l) \to \mathcal{B}(W)$ on balls, which is usually also denoted by $f$, if the inclusion property

$$f(x_1, \ldots, x_l) \in \boldsymbol{f}(\boldsymbol{x}_1, \ldots, \boldsymbol{x}_l) \tag{1}$$

is satisfied for any $\boldsymbol{x}_1 \in \mathcal{B}(V_1), \ldots, \boldsymbol{x}_l \in \mathcal{B}(V_l)$ and $x_1 \in \boldsymbol{x}_1, \ldots, x_l \in \boldsymbol{x}_l$. For instance, if $V$ is a Banach algebra, then we may take

$$\mathcal{B}(x, r) + \mathcal{B}(y, s) = \mathcal{B}(x + y, r + s)$$
$$\mathcal{B}(x, r) - \mathcal{B}(y, s) = \mathcal{B}(x - y, r + s)$$
$$\mathcal{B}(x, r) \cdot \mathcal{B}(y, s) = \mathcal{B}(x y, (\lVert x \rVert + r) s + r \lVert y \rVert)$$

Similar formulas can be given for division and elementary functions.
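The lifted ring operations above can be sketched in a few lines of Python. This is a toy model over the reals which ignores rounding errors (those are handled in section 2.2); the class name `Ball` and the representation are ours, chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Ball:
    """Closed ball with center `c` and radius `r` (toy model over the
    reals; rounding errors are deliberately ignored here)."""
    c: float
    r: float

    def __add__(self, other):
        # cen(x) + cen(y), rad(x) + rad(y)
        return Ball(self.c + other.c, self.r + other.r)

    def __sub__(self, other):
        # subtraction also *adds* the radii
        return Ball(self.c - other.c, self.r + other.r)

    def __mul__(self, other):
        # radius (|cen x| + rad x) * rad y + rad x * |cen y| bounds
        # |x' y' - cen(x) cen(y)| for all x' in x, y' in y
        return Ball(self.c * other.c,
                    (abs(self.c) + self.r) * other.r + self.r * abs(other.c))

    def __contains__(self, x):
        return abs(x - self.c) <= self.r
```

For instance, with `x = Ball(2.0, 0.1)` and `y = Ball(3.0, 0.2)`, the product ball has center 6 and radius 0.72, and indeed contains every product of a point of `x` with a point of `y`.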

2.2. Floating point arithmetic

In concrete machine computations, numbers are usually approximated by floating point numbers with a finite precision. Let $\mathbb{D} = \mathbb{D}_p$ be the set of floating point numbers at a working precision of $p$ bits. It is customary to include the infinities $\pm\infty$ in $\mathbb{D}$ as well. The IEEE 754 standard [ANS08] specifies how to perform basic arithmetic with floating point numbers in a predictable way, by specifying a rounding mode $R \in \{\downarrow, \uparrow, \updownarrow\}$ among “down”, “up” and “nearest”. A multiple precision implementation of this standard is available in the Mpfr library [HLRZ00]. Given an operation $f : \mathbb{R}^l \to \mathbb{R}$, we will denote by $f^R : \mathbb{D}^l \to \mathbb{D}$ its approximation using floating point arithmetic with rounding mode $R$. This notation extends to the case when $\mathbb{R}$ and $\mathbb{D}$ are replaced by their complexifications $\mathbb{C}$ and $\mathbb{C}\mathbb{D} = \mathbb{D}[\mathrm{i}]$.

Setting $\mathbb{D}^{\geqslant} = \{x \in \mathbb{D} : x \geqslant 0\}$, we will denote by $\mathcal{B}(\mathbb{C}\mathbb{D}, \mathbb{D}^{\geqslant})$ the set of closed balls in $\mathbb{C}$ with centers in $\mathbb{C}\mathbb{D}$ and radii in $\mathbb{D}^{\geqslant}$. In this case, we will also allow for balls with an infinite radius. A continuous operation $f : \mathbb{C}^l \to \mathbb{C}$ is again said to lift to an operation $\boldsymbol{f}$ on balls if (1) holds for any $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_l \in \mathcal{B}(\mathbb{C}\mathbb{D}, \mathbb{D}^{\geqslant})$ and $x_1 \in \boldsymbol{x}_1, \ldots, x_l \in \boldsymbol{x}_l$. The formulas for the ring operations may now be adapted to

$$\mathcal{B}(x, r) + \mathcal{B}(y, s) = \mathcal{B}(x +^{\updownarrow} y, (r +^{\uparrow} s) +^{\uparrow} \epsilon_+)$$
$$\mathcal{B}(x, r) - \mathcal{B}(y, s) = \mathcal{B}(x -^{\updownarrow} y, (r +^{\uparrow} s) +^{\uparrow} \epsilon_-)$$
$$\mathcal{B}(x, r) \cdot \mathcal{B}(y, s) = \mathcal{B}(x \cdot^{\updownarrow} y, ((|x| +^{\uparrow} r) \cdot^{\uparrow} s +^{\uparrow} r \cdot^{\uparrow} |y|) +^{\uparrow} \epsilon_\times)$$

where $\epsilon_+$, $\epsilon_-$ and $\epsilon_\times$ are reliable bounds for the rounding errors induced by the corresponding floating point operations on the centers; see [vdH09] for more details. Given $\boldsymbol{x} \in \mathcal{B}(\mathbb{C}\mathbb{D}, \mathbb{D}^{\geqslant})$, it will be convenient to denote by $\lfloor \boldsymbol{x} \rfloor$ and $\lceil \boldsymbol{x} \rceil$ certified lower and upper bounds in $\mathbb{D}^{\geqslant}$ for $|x|$, where $x$ ranges over $\boldsymbol{x}$.
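The rounding-error bounds $\epsilon_+$, $\epsilon_-$, $\epsilon_\times$ can be made concrete even without directed rounding. The sketch below (function names ours) uses the fact that round-to-nearest is accurate to half an ulp, so `math.ulp` of the result is a safe, slightly pessimistic bound; a real implementation would instead round the radius computation upward, e.g. through MPFR:

```python
import math

def add_up(a, b):
    """Round-to-nearest addition together with a certified bound on its
    rounding error: the computed sum is within half an ulp of the exact
    sum, so ulp(s) safely over-estimates the eps_+ of the text."""
    s = a + b
    return s, math.ulp(s)

def ball_add(x, y):
    """Lifted addition on balls given as (center, radius) pairs: centers
    are added with rounding to nearest, and the rounding bound is
    absorbed into the radius.  (Sketch only: the radius arithmetic
    itself is not rounded upward here.)"""
    c, err = add_up(x[0], y[0])
    return (c, x[1] + y[1] + err)
```

For example, adding the balls $\mathcal{B}(1, 0)$ and $\mathcal{B}(10^{-17}, 0)$ yields a center that rounds to exactly 1.0, while the radius absorbs the lost $10^{-17}$.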

2.3. Balls of polynomials and analytic functions

Let $\mathcal{D} = \{z \in \mathbb{C} : |z| \leqslant 1\}$ denote the closed unit disk and $\mathcal{C} = \{z \in \mathbb{C} : |z| = 1\}$ the closed unit circle. Let $\mathcal{A}_{\mathcal{D}}$ and $\mathcal{A}_{\mathcal{C}}$ denote the rings of analytic functions on neighbourhoods of $\mathcal{D}$ resp. $\mathcal{C}$. We define norms on $\mathcal{A}_{\mathcal{D}}$ and $\mathcal{A}_{\mathcal{C}}$ by

$$\lVert f \rVert_{\mathcal{D}} = \max_{z \in \mathcal{D}} |f(z)|, \qquad \lVert f \rVert_{\mathcal{C}} = \max_{z \in \mathcal{C}} |f(z)|.$$

By the maximum modulus principle, the inclusion $\mathcal{A}_{\mathcal{D}} \hookrightarrow \mathcal{A}_{\mathcal{C}}$ preserves norms. Since our criterion for root counting will be based on a variant of Rouché's theorem, we will mainly consider analytic functions on $\mathcal{C}$ in what follows. In order to avoid confusion with balls of complex numbers, we will denote by $\mathcal{N}(f, r)$ the closed ball of radius $r$ with center $f$ in $\mathcal{A}_{\mathcal{C}}$:

$$\mathcal{N}(f, r) = \{g \in \mathcal{A}_{\mathcal{C}} : \lVert g - f \rVert_{\mathcal{C}} \leqslant r\}.$$

In view of section 2.1, we may then consider the space of balls with centers in $\mathcal{A}_{\mathcal{C}}$ and radii in $\mathbb{R}^{\geqslant}$. Any such ball can be written $\mathcal{N}(f, r)$ for $f \in \mathcal{A}_{\mathcal{C}}$ and $r \in \mathbb{R}^{\geqslant}$. For concrete machine computations, and in a similar way as in section 2.2, we may also consider the space of balls $\mathcal{N}(P, \epsilon)$ with centers in $\mathbb{C}\mathbb{D}[z]$ and radii in $\mathbb{D}^{\geqslant}$. Given $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$, we will denote by $\lceil \boldsymbol{f} \rceil$ a certified upper bound for $\lVert f \rVert_{\mathcal{C}}$ on $\boldsymbol{f}$. If $P = P_0 + \cdots + P_d z^d$, then we may for instance take $\lceil \boldsymbol{f} \rceil = (|P_0| +^{\uparrow} \cdots +^{\uparrow} |P_d|) +^{\uparrow} \epsilon$.

2.4. Simplification

Consider a ball $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$ with $P = P_0 + \cdots + P_d z^d \in \mathbb{C}\mathbb{D}[z]$ and let $e$ be the maximum of the exponents of the coefficients of $P$. Let $k \leqslant d$ be maximal such that

$$|P_k| \geqslant 2^{e - p},$$

and take $k = 0$ if no such index exists. Denote $\tilde{P} = P_0 + \cdots + P_k z^k$. Truncation of the mantissas of the coefficients of $\tilde{P}$ leads to a decomposition

$$P = P^* + R,$$

where

$$\deg P^* \leqslant k, \qquad \lVert R \rVert_{\mathcal{C}} \leqslant |R_0| + \cdots + |R_d| \leqslant \delta$$

for a certified bound $\delta \in \mathbb{D}^{\geqslant}$. We define the simplification of $\boldsymbol{f}$ by

$$\operatorname{simplify}(\boldsymbol{f}) = \mathcal{N}(P^*, \epsilon +^{\uparrow} \delta)$$

and notice that $\boldsymbol{f} \subseteq \operatorname{simplify}(\boldsymbol{f})$.
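The simplification step can be illustrated as follows. This is a toy model (function name and signature ours, no outward rounding of the radius update): coefficients below the threshold $2^{e-p}$ are moved into the error radius, using the bound $\lVert R \rVert_{\mathcal{C}} \leqslant \sum |R_i|$, and trailing zeros are stripped, which may lower the degree:

```python
import math

def simplify(P, eps, p=30):
    """Simplify the ball N(P, eps): coefficients of P (list of complex
    numbers, low degree first) smaller than 2**(e - p) are discarded,
    where e is the largest exponent among the coefficients and p the
    working precision.  Their moduli are added to eps, since the sup
    norm of the discarded part on the unit circle is at most the sum of
    the discarded moduli.  Returns (P_simplified, new_eps)."""
    if all(c == 0 for c in P):
        return [0j], eps
    e = max(math.frexp(abs(c))[1] for c in P if c != 0)
    threshold = 2.0 ** (e - p)
    Q, dropped = [], 0.0
    for c in P:
        if abs(c) < threshold:
            Q.append(0j)
            dropped += abs(c)          # ||R||_C <= sum of dropped moduli
        else:
            Q.append(c)
    while len(Q) > 1 and Q[-1] == 0:
        Q.pop()                        # the degree may decrease
    return Q, eps + dropped
```

For instance, at precision $p = 30$, the polynomial $1 + 10^{-12} z + 0.5 z^2 + 10^{-15} z^3$ simplifies to $1 + 0.5 z^2$ of degree 2, with the moduli of the two tiny coefficients absorbed into the radius.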

2.5. Efficient multiplication

Assume now that we want to multiply two balls $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$ and $\boldsymbol{g} = \mathcal{N}(Q, \eta)$ in an efficient way. Modulo simplification, we may assume without loss of generality that the exponents of the nonzero coefficients of $P$ all lie in an interval of width $p$, and similarly for $Q$. For some common $e \in \mathbb{Z}$, we thus have

$$P = 2^e \hat{P}, \qquad Q = 2^e \hat{Q}, \qquad \hat{P}, \hat{Q} \in \mathbb{Z}[\mathrm{i}][z].$$

We may multiply the integer polynomials $\hat{P}$ and $\hat{Q}$ using any fast classical algorithm, such as Kronecker substitution [PB94, GG02]. When truncating the product back to a complex floating point polynomial $R \in \mathbb{C}\mathbb{D}[z]$, we have

$$\lVert R - P Q \rVert_{\mathcal{C}} \leqslant \delta$$

for a certified bound $\delta \in \mathbb{D}^{\geqslant}$. Consequently, we may take

$$\boldsymbol{f} \boldsymbol{g} = \mathcal{N}(R, ((\lceil \boldsymbol{f} \rceil \cdot^{\uparrow} \eta) +^{\uparrow} (\epsilon \cdot^{\uparrow} \lceil \mathcal{N}(Q, 0) \rceil)) +^{\uparrow} \delta).$$
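Kronecker substitution itself is simple enough to sketch here. The idea is to pack each polynomial into one big integer with enough bits per coefficient slot, do a single large integer multiplication, and unpack; Python's built-in big integers make this a few lines (function name ours, non-negative coefficients assumed for simplicity):

```python
def kronecker_mul(A, B):
    """Multiply two integer polynomials (coefficient lists, low degree
    first, non-negative coefficients) by Kronecker substitution: pack
    into big integers, multiply once, unpack.  Signed or Gaussian
    integer coefficients can be handled with offsets or four calls."""
    # Each product coefficient is at most max(A)*max(B)*min(len(A), len(B)),
    # so b bits per slot prevent carries between slots.
    b = (max(A) * max(B) * min(len(A), len(B))).bit_length() + 1
    a = sum(c << (i * b) for i, c in enumerate(A))
    bb = sum(c << (i * b) for i, c in enumerate(B))
    prod, mask = a * bb, (1 << b) - 1
    return [(prod >> (i * b)) & mask for i in range(len(A) + len(B) - 1)]
```

For example, $(1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3$ comes out of a single integer product.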

3. The algorithm

Given an analytic function $f \in \mathcal{A}_{\mathcal{C}}$ with no zeros on $\mathcal{C}$, we will denote

$$N(f) = \frac{1}{2 \pi \mathrm{i}} \oint_{\mathcal{C}} \frac{f'(z)}{f(z)} \, \mathrm{d} z.$$

If $f \in \mathcal{A}_{\mathcal{D}}$, then $N(f)$ counts the number of zeros of $f$ in the open unit disk, with multiplicities. Now consider a ball $\boldsymbol{f}$ of analytic functions. Whenever no $f \in \boldsymbol{f}$ admits a zero on $\mathcal{C}$, the number $N(f)$ does not depend on the choice of $f \in \boldsymbol{f}$, and we denote it by $N(\boldsymbol{f})$. The aim of this paper is to compute $N(\boldsymbol{f})$ in this case. If some $f \in \boldsymbol{f}$ admit zeros which are too close to $\mathcal{C}$, then the algorithm is allowed to fail.

3.1. Rouché's theorem

The method that we will use for certifying the number of roots relies on the following variant of Rouché's theorem.

Theorem 1. Let $f$ and $g$ be two analytic functions on $\mathcal{C}$, such that

$$|f(z) - g(z)| < |g(z)| \qquad \text{for all } z \in \mathcal{C}.$$

Then $N(f) = N(g)$.

Proof. We will use a similar argument as in [Lan76, page 158]. Let $\gamma$ be the path $\theta \in [0, 2\pi] \mapsto \mathrm{e}^{\mathrm{i} \theta}$ on $\mathcal{C}$ which turns one time around the origin and let $\delta = (f / g) \circ \gamma$. By our assumption, the path $\delta$ is contained in the open disk with center one and radius one. Since this disk does not contain the origin, we have [Lan76, Lemma 3, page 116]

$$N(f / g) = \frac{1}{2 \pi \mathrm{i}} \oint_{\delta} \frac{\mathrm{d} z}{z} = 0.$$

But

$$\frac{f'}{f} = \frac{(f / g)'}{f / g} + \frac{g'}{g},$$

whence $N(f) = N(f / g) + N(g) = N(g)$.

Given a polynomial $P = P_0 + \cdots + P_d z^d$ and $k \in \{0, \ldots, d\}$, let us denote by $P_{\setminus k}$ the polynomial $P - P_k z^k$.

Proposition 2. Consider a ball $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$ and let $k$ be the index for which $|P_k|$ is maximal. If

$$\lceil \mathcal{N}(P_{\setminus k}, \epsilon) \rceil < \lfloor |P_k| \rfloor,$$

then $N(\boldsymbol{f}) = k$.

Proof. Let $f = P + \varphi$ with $\lVert \varphi \rVert_{\mathcal{C}} \leqslant \epsilon$. Setting $g = P_k z^k$, our assumption implies

$$|f(z) - g(z)| \leqslant \lVert P_{\setminus k} \rVert_{\mathcal{C}} + \epsilon < |P_k| = |g(z)|$$

for all $z \in \mathcal{C}$. By Theorem 1, we conclude that $N(f) = N(g) = k$.
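In floating point terms, the criterion of Proposition 2 amounts to one comparison. The sketch below (function name ours, no directed rounding) uses the coefficient sum $\sum_{i \neq k} |P_i|$ as the certified upper bound on the sup norm of $P_{\setminus k}$ over the unit circle:

```python
def dominant_root_count(P, eps):
    """Certified-style root count in the unit disk via the dominant
    monomial criterion (Proposition 2, sketched without outward
    rounding): if sum of |P_i| over i != k, plus eps, is below |P_k|,
    then every f = P + (sup-norm error <= eps) has exactly k zeros in
    the unit disk.  P: coefficients low degree first.  Returns k, or
    None when the criterion fails."""
    k = max(range(len(P)), key=lambda i: abs(P[i]))
    rest = sum(abs(c) for i, c in enumerate(P) if i != k)
    return k if rest + eps < abs(P[k]) else None
```

For example, $0.1 + 3z + 0.2z^2$ with error radius $0.1$ is certified to have one zero in the disk (its roots are near $-1/30$ and $-15$), while $1 + z$ is rejected because its zero lies on the circle.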

3.2. Graeffe transforms

The second ingredient for our algorithm is the Graeffe transform, which we will use for analytic functions. Given $f \in \mathcal{A}_{\mathcal{C}}$, we define its Graeffe transform $G(f) \in \mathcal{A}_{\mathcal{C}}$ by

$$G(f)(z^2) = f(z) \, f(-z).$$

If $f$ is actually a polynomial, then we notice that $G(f)$ has the same degree as $f$. The fundamental property of Graeffe transforms is:

Proposition 3. On a small annulus containing $\mathcal{C}$, the roots of $G(f)$ are precisely the squares of the roots of $f$, when counting with multiplicities.

Proof. By continuity under small deformations, it suffices to consider the case when $f$ has only simple zeros near $\mathcal{C}$. Now

$$G(f)(z^2) = f(z) \, f(-z) = 0 \Longleftrightarrow f(z) = 0 \text{ or } f(-z) = 0$$

for all $z$ near $\mathcal{C}$.

Let us show that the Graeffe transform can be lifted to an operation $\boldsymbol{G}$ on balls which satisfies the inclusion property

$$G(f) \in \boldsymbol{G}(\boldsymbol{f})$$

for all $\boldsymbol{f}$ and $f \in \boldsymbol{f}$. If $\epsilon = 0$, then we may directly use the formula

$$\boldsymbol{G}(\mathcal{N}(P, 0)) = P_{\mathrm{e}}^2 - z \, P_{\mathrm{o}}^2, \qquad P(z) = P_{\mathrm{e}}(z^2) + z \, P_{\mathrm{o}}(z^2),$$

assuming that the squares are computed using the algorithm for ball multiplication in section 2.5. In general, we take

$$\boldsymbol{G}(\mathcal{N}(P, \epsilon)) = \boldsymbol{G}(\mathcal{N}(P, 0)) + \mathcal{N}(0, (2 \lceil \mathcal{N}(P, 0) \rceil +^{\uparrow} \epsilon) \cdot^{\uparrow} \epsilon)$$

and claim that this definition satisfies the inclusion property. Indeed, given $f \in \mathcal{N}(P, \epsilon)$, there exists a $\varphi$ with $\lVert \varphi \rVert_{\mathcal{C}} \leqslant \epsilon$ and $f = P + \varphi$. Hence,

$$G(f)(z^2) = f(z) f(-z) = P(z) P(-z) + P(z) \varphi(-z) + \varphi(z) P(-z) + \varphi(z) \varphi(-z).$$

Since $\lVert P \rVert_{\mathcal{C}} \leqslant \lceil \mathcal{N}(P, 0) \rceil$ and $\lVert \varphi \rVert_{\mathcal{C}} \leqslant \epsilon$, we thus have

$$\lVert G(f) - G(P) \rVert_{\mathcal{C}} \leqslant 2 \lVert P \rVert_{\mathcal{C}} \, \epsilon + \epsilon^2 \leqslant (2 \lceil \mathcal{N}(P, 0) \rceil + \epsilon) \, \epsilon.$$

This proves our claim.
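The even/odd decomposition makes one Graeffe step on the polynomial part a small exercise. The following sketch (function name ours) uses plain schoolbook squaring for clarity, where the text would use the fast ball multiplication of section 2.5:

```python
def graeffe(P):
    """One Graeffe step for a polynomial P (coefficient list, low degree
    first): G(P)(z**2) = P(z) * P(-z).  Writing P(z) = Pe(z**2) + z * Po(z**2)
    with even part Pe and odd part Po, we get G(P) = Pe**2 - z * Po**2,
    a polynomial of the same degree whose roots are the squares of the
    roots of P."""
    def sq(A):
        if not A:
            return []
        out = [0.0] * (2 * len(A) - 1)      # schoolbook squaring
        for i, a in enumerate(A):
            for j, b in enumerate(A):
                out[i + j] += a * b
        return out
    E, O = sq(P[0::2]), sq(P[1::2])         # Pe**2 and Po**2
    res = [0.0] * len(P)
    for i, c in enumerate(E):
        res[i] += c
    for i, c in enumerate(O):
        res[i + 1] -= c                     # the -z * Po**2 part
    return res
```

For instance, $P = (z - 2)(z - 3) = 6 - 5z + z^2$ transforms into $36 - 13z + z^2 = (z - 4)(z - 9)$, whose roots $4$ and $9$ are indeed the squares of $2$ and $3$.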

3.3. Root counting

Putting together the various pieces, we now have the following algorithm for root counting. Optionally, one may add an extra integer parameter in order to limit the number of recursive calls.

Algorithm root-count

Input: a ball $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$ with $P \in \mathbb{C}\mathbb{D}[z]$ and $\epsilon \in \mathbb{D}^{\geqslant}$

Output: $N(\boldsymbol{f})$ or failure

Let $\boldsymbol{f} := \operatorname{simplify}(\boldsymbol{f})$ and write $\boldsymbol{f} = \mathcal{N}(P, \epsilon)$ with $P = P_0 + \cdots + P_d z^d$

If $P = 0$, then abort

Let $k$ be such that $|P_k|$ is maximal

If $\epsilon \geqslant \lfloor |P_k| \rfloor$, then abort

If $\lceil \mathcal{N}(P_{\setminus k}, \epsilon) \rceil < \lfloor |P_k| \rfloor$, then return $k$

Return root-count$(\boldsymbol{G}(\boldsymbol{f}))$

One nice property of this algorithm is that the degree of $P$ often quickly decreases during successive recursive calls. More precisely, let $u_1, \ldots, u_d$ be the zeros of $P$, ordered such that $|u_1| \geqslant \cdots \geqslant |u_d|$. Then, after $n$ Graeffe steps,

$$|G^n(P)_i| \approx |P_d \, u_1 \cdots u_{d - i}|^{2^n}.$$

In the right-hand side, all coefficients which are smaller than $2^{-p}$ times the largest coefficient are discarded via simplification. Roughly speaking, if $k$ denotes the index of the largest coefficient, then for $i < k$, the $i$-th coefficient therefore only survives as long as $|u_{d - k + 1} \cdots u_{d - i}|^{2^n} \geqslant 2^{-p}$.
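The complete loop of section 3.3 can be assembled into a compact prototype. The sketch below (names ours) uses plain double precision without directed rounding, so its bounds are not fully certified, but it follows the structure of the algorithm: simplify, test the criterion of Proposition 2, otherwise take a Graeffe step and enlarge the error radius by the bound $2 \lVert P \rVert \epsilon + \epsilon^2$:

```python
def root_count(P, eps, max_iter=60):
    """Count the zeros in the unit disk of every f = P + phi with
    sup-norm |phi| <= eps on the unit circle.  Prototype of the
    algorithm of section 3.3 (no directed rounding).  P: complex
    coefficients, low degree first.  Returns the count, or None."""
    def graeffe(P):                        # G(P)(z**2) = P(z) * P(-z)
        def sq(A):
            out = [0j] * (2 * len(A) - 1) if A else []
            for i, a in enumerate(A):
                for j, b in enumerate(A):
                    out[i + j] += a * b
            return out
        E, O = sq(P[0::2]), sq(P[1::2])
        res = [0j] * len(P)
        for i, c in enumerate(E):
            res[i] += c
        for i, c in enumerate(O):
            res[i + 1] -= c
        return res

    for _ in range(max_iter):
        m = max(abs(c) for c in P)
        if m == 0:
            return None                    # abort: no information left
        # simplification: tiny coefficients move into eps; trailing
        # zeros are stripped, so the degree may drop
        Q = []
        for c in P:
            if abs(c) < m * 2.0 ** -52:
                eps += abs(c)
                Q.append(0j)
            else:
                Q.append(c)
        while len(Q) > 1 and Q[-1] == 0:
            Q.pop()
        P = Q
        k = max(range(len(P)), key=lambda i: abs(P[i]))
        if eps >= abs(P[k]):
            return None                    # abort: error too large
        if sum(abs(c) for i, c in enumerate(P) if i != k) + eps < abs(P[k]):
            return k                       # Proposition 2 applies
        norm = sum(abs(c) for c in P)      # upper bound for ||P|| on the circle
        eps = 2 * norm * eps + eps * eps   # ||G(f) - G(P)|| bound
        P = graeffe(P)
    return None
```

On $(z - 2)(z - 3)$ the criterion fails at first (the coefficient sums balance), but one Graeffe step separates the coefficients and the count $0$ is returned.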

3.4. Alternative termination strategies

In our algorithm, satisfaction of the condition of Proposition 2 enabled us to produce a final certified root count. It may be possible to design quick certified root counts for other favourable cases. Any alternative root counting strategy may actually be tried before applying our algorithm, or even at every stage, as a replacement of the default strategy based on the condition of Proposition 2.

One natural alternative root counting strategy is based on the evaluation of $f$ at many points on $\mathcal{C}$. Let $N$ be the smallest power of two which is larger than the degree $d$ of $P$ and take $\omega = \mathrm{e}^{2 \pi \mathrm{i} / N}$. Then we may efficiently evaluate $P$ at the points $1, \omega, \ldots, \omega^{N - 1}$ using the FFT. Furthermore, the distance between two successive powers of $\omega$ on $\mathcal{C}$ is bounded by $2 \pi / N$. Now assume that

$$\lfloor |P(\omega^j)| \rfloor > \frac{2 \pi \lceil \mathcal{N}(P', 0) \rceil}{N} + \epsilon \tag{2}$$

for all $j \in \{0, \ldots, N - 1\}$. Then it is guaranteed that $f$ admits no zeros on $\mathcal{C}$. Hence,

$$N(\boldsymbol{f}) = \frac{1}{2 \pi} \sum_{j = 0}^{N - 1} \arg \frac{P(\omega^{j + 1})}{P(\omega^j)}.$$

When multiplying the number of points by a constant $K$, we may replace $N$ by $K N$ in the denominator of (2). We may also consider the second order condition

$$\lfloor |P(\omega^j)| \rfloor > \frac{2 \pi \, |P'(\omega^j)|}{N} + \frac{2 \pi^2 \lceil \mathcal{N}(P'', 0) \rceil}{N^2} + \epsilon \tag{3}$$

instead of (2), which requires the additional evaluation of $P'$ at the powers $\omega^j$. The conditions (2) and (3) have a reasonable chance of being satisfied if the closest root of $P$ with respect to $\mathcal{C}$ is at distance at least $2 \pi / N$.
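The uncertified core of this strategy, i.e. the winding-number sum over sample points, is easy to prototype. The sketch below (function name ours) evaluates by Horner's rule at the roots of unity rather than by FFT, and omits the certification of conditions (2) and (3); it is correct whenever the samples are dense enough that consecutive argument increments stay below $\pi$:

```python
import cmath

def winding_count(P, n=512):
    """Heuristic root count: evaluate P (coefficients, low degree first)
    at the n-th roots of unity and sum the argument increments along the
    unit circle.  When P has no zeros on or numerically near the circle
    and n is large enough, the total winding number equals the number of
    zeros in the open unit disk.  A full implementation would evaluate
    by FFT and certify absence of zeros via condition (2)."""
    def ev(z):
        acc = 0j
        for c in reversed(P):              # Horner evaluation
            acc = acc * z + c
        return acc
    vals = [ev(cmath.exp(2j * cmath.pi * j / n)) for j in range(n)]
    total = 0.0
    for j in range(n):
        total += cmath.phase(vals[(j + 1) % n] / vals[j])
    return round(total / (2 * cmath.pi))
```

For instance, it reports one zero in the disk for $(z - 1/2)(z - 3)$, none for $(z - 2)(z - 3)$, and two for $z^2$.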

Bibliography

[AH83] G. Alefeld and J. Herzberger. Introduction to interval analysis. Academic Press, New York, 1983.

[ANS08] ANSI/IEEE. IEEE standard for binary floating-point arithmetic. Technical report, ANSI/IEEE, New York, 2008. ANSI-IEEE Standard 754-2008. Revision of IEEE 754-1985, approved on June 12, 2008 by IEEE Standards Board.

[GG02] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 2nd edition, 2002.

[Gou96] X. Gourdon. Combinatoire, algorithmique et géométrie des polynômes. PhD thesis, École polytechnique, 1996.

[HLRZ00] G. Hanrot, V. Lefèvre, K. Ryde, and P. Zimmermann. MPFR, a C library for multiple-precision floating-point computations with exact rounding. http://www.mpfr.org, 2000.

[KvB00] P. Kravanja and M. van Barel. Computing the zeros of analytic functions, volume 1727 of Lecture Notes in Mathematics. Springer-Verlag, 2000.

[Lan76] S. Lang. Complex analysis. Addison-Wesley, 1976.

[MKC09] R.E. Moore, R.B. Kearfott, and M.J. Cloud. Introduction to Interval Analysis. SIAM Press, 2009.

[Moo66] R.E. Moore. Interval Analysis. Prentice Hall, Englewood Cliffs, N.J., 1966.

[Pan02] V. Y. Pan. Univariate polynomials: nearly optimal algorithms for numerical factorization and root-finding. J. Symbolic Comput., 33(5):701–733, 2002.

[PB94] V. Pan and D. Bini. Polynomial and matrix computations. Birkhäuser, 1994.

[Rum10] S.M. Rump. Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19:287–449, 2010.

[Sch82] A. Schönhage. The fundamental theorem of algebra in terms of computational complexity. Technical report, Math. Inst. Univ. of Tübingen, 1982.

[vdH09] J. van der Hoeven. Ball arithmetic. Technical report, HAL, 2009. http://hal.archives-ouvertes.fr/hal-00432152/fr/.