Efficient accelero-summation of
holonomic functions

Joris van der Hoeven

Dépt. de Mathématiques (bât. 425)

CNRS, Université Paris-Sud

91405 Orsay Cedex

France

Email: vdhoeven@texmacs.org

November 4, 2023



Abstract

Let be a linear differential operator, where is the field of algebraic numbers. A holonomic function over is a solution to the equation . We will also assume that admits initial conditions in at a non-singular point .

Given a broken-line path between and , which avoids the singularities of and with vertices in , we have shown in a previous paper [van der Hoeven, 1999] how to compute digits of the analytic continuation of along in time . In a second paper [van der Hoeven, 2001b], this result was generalized to the case when is allowed to be a regular singularity, in which case we compute the limit of when we approach the singularity along .

In the present paper, we treat the remaining case when the end-point of is an irregular singularity. In fact, we will solve the more general problem of computing “singular transition matrices” between non-standard points above a singularity and regular points in near the singularity. These non-standard points correspond to the choice of “non-singular directions” in Écalle's accelero-summation process.

We will show that the entries of the singular transition matrices may be approximated up to decimal digits in time . As a consequence, the entries of the Stokes matrices for at each singularity may be approximated with the same time complexity.


1 Introduction

Definitions

Let be a subfield of . A holonomic function over is a solution to a linear differential equation , where is a monic linear differential operator of order . Many classical special functions, such as , , , , , hypergeometric functions, Bessel functions, the Airy function, etc. are holonomic. Moreover, the class of holonomic functions is stable under many operations, such as addition, multiplication, differentiation, integration and postcomposition with algebraic functions. In the sequel, and unless stated otherwise, we will assume that is the field of algebraic numbers. We will say that has initial conditions in if for a certain non-singular point .

In this paper, we will be concerned with the efficient multidigit evaluation of limits of holonomic functions at irregular singularities. For this, it will be convenient to introduce some terminology. We say that is effective if there exists an approximation algorithm which takes on input and returns a dyadic approximation with . Inside a computer, an effective complex number is represented as an object with a method which corresponds to its approximation algorithm [van der Hoeven, 2005b]. We denote by the set of effective complex numbers.

The time complexity of is the time complexity of its approximation algorithm, expressed in terms of . If an approximation algorithm has time complexity , then we call it a -approximation algorithm. An effective number is said to be fast if it admits an approximation algorithm with a time complexity of the form . We denote by the set of such numbers. A partial function is said to be fast if it maps into . For instance, multiplication is fast [Schönhage and Strassen, 1971], since two -bit numbers can be multiplied in time . Functions defined implicitly in terms of fast functions, like division, are also fast, as a result of Newton's method.
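
To fix ideas, here is a minimal Python sketch of what an effective number looks like as a data structure: an object wrapping an approximation algorithm which, for a given n, returns a dyadic approximation within 2^-n. The class name and the crude tail bound used in the example for e are illustrative choices only, not the representation of [van der Hoeven, 2005b].

from fractions import Fraction

class EffectiveNumber:
    """A number represented by its approximation algorithm: approximate(n)
    returns a dyadic number within 2**-n of the true value (illustrative
    sketch only)."""
    def __init__(self, approx):
        self._approx = approx
    def approximate(self, n):
        return self._approx(n)

def approx_e(n):
    # Sum 1/k! until the tail is below 2**-(n+2), then round to a multiple of
    # 2**-n; the two error sources together stay below 2**-n.
    tol = Fraction(1, 2 ** (n + 2))
    total, term, k = Fraction(1), Fraction(1), 1
    while term / k > tol:          # term/k == 1/k! bounds the tail up to a factor 2
        term /= k
        total += term
        k += 1
    return Fraction(round(total * 2 ** n), 2 ** n)

e = EffectiveNumber(approx_e)
print(float(e.approximate(50)))    # 2.718281828459045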

Whenever the coefficients of admit singularities, then solutions to are typically multivalued functions on a Riemann surface. From an effective point of view, points on such a Riemann surface may be addressed via broken-line paths starting at the point where we specified the initial conditions for . Each straight-line segment should be sufficiently short, so that the disk with center and radius contains no singularities. Given such a path, we will denote by the evaluation of at the endpoint of , as obtained via analytic continuation.

Previous work

It was first noticed by Brent [Brent, 1976a, Section 6] that the constant admits an efficient -approximation algorithm based on binary splitting. This result was obtained by analogy with Schönhage's fast algorithm for radix conversion. The paper also mentions efficient algorithms for the computation of more general exponentials, although this direction was not investigated in more detail, probably because even more efficient -algorithms were discovered shortly afterwards [Brent, 1976b].
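
As an illustration of the binary splitting technique behind this observation, the following Python sketch sums the series e = 1 + 1/1! + 1/2! + ... exactly as a fraction, recursively splitting the summation range so that the integer operands stay balanced; with fast integer multiplication, the cost is then softly linear in the size of the output. The function names and the choice of the truncation order are illustrative.

from fractions import Fraction

def bsplit(a, b):
    # Returns (p, q) with q = (a+1)(a+2)...b and
    #   sum_{k=a+1}^{b} 1/((a+1)(a+2)...k) = p/q.
    # The recursion keeps the integer operands balanced, which is what makes
    # fast integer multiplication pay off.
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = bsplit(a, m)
    p2, q2 = bsplit(m, b)
    return p1 * q2 + p2, q1 * q2

def e_by_binary_splitting(terms):
    # e = 1 + sum_{k=1}^{terms} 1/k! + O(1/terms!); choose 'terms' so that
    # terms! exceeds the desired precision.
    p, q = bsplit(0, terms)
    return 1 + Fraction(p, q)

print(float(e_by_binary_splitting(30)))    # 2.718281828459045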

The binary splitting algorithm was generalized to arbitrary holonomic functions over in [Chudnovsky and Chudnovsky, 1990]. It was shown there that, given a holonomic function over with initial conditions in , and a broken-line path as above with , the number admits an -approximation algorithm. In the case when is a more general effective number with a -approximation algorithm, it was also shown that admits an -approximation algorithm. In particular, the restriction of a holonomic function to an open domain of is fast. By what precedes, this result is extremely interesting for the efficient multidigit evaluation of many special functions. Special cases and a few extensions were rediscovered independently by several authors [Karatsuba, 1991; Karatsuba, 1993; Karatsuba, 1995; Karatsuba, 2000; van der Hoeven, 1997; van der Hoeven, 1999; Haible and Papanikolaou, 1997].

Remark 1.1. An early hint at the existence of fast algorithms for the evaluation of holonomic functions occurred in [Gosper and Schroeppel, 1972]. It is plausible that the authors had something like the binary splitting algorithm in mind (the announced complexity is the right one up to a factor ), but no details are provided.

Our first paper [van der Hoeven, 1999] on the subject contained three improvements with respect to [Chudnovsky and Chudnovsky, 1990]. First, we noticed the possibility of working over the algebraic numbers instead of , which allows for the fast evaluation of constants like . Secondly, we improved the above factor of (for the evaluation at arbitrary points) to . Finally, the evaluation of depends on a certain number of bounds, which were assumed to exist empirically in [Chudnovsky and Chudnovsky, 1990]. In [van der Hoeven, 1999], it was shown that all necessary bounds can be computed effectively, as a function of the operator and the path . Stated otherwise, we showed that there exists an algorithm which takes , and the initial conditions for at on input, and which computes (as an object with a -approximation algorithm).

In a second paper [van der Hoeven, 2001b], we continued our studies by showing how to efficiently evaluate the limit of along a broken-line path which ends in a regular singular point . This extension allows for the efficient evaluation of multiple zeta values, Bessel functions (whose initial conditions are specified at a regular singular point) and many other interesting transcendental constants. Some special cases of this more general result were obtained earlier in [Karatsuba, 1993; Karatsuba, 1995; Haible and Papanikolaou, 1997].

A problem related to the evaluation of at the end-point of a broken-line path is the computation of “transition matrices” along . Given a path from to , the “initial conditions” of at depend linearly on the “initial conditions” at . Hence, when considering and as column vectors, there exists a unique scalar matrix with

which is called the transition matrix along for . The relation makes transition matrices well-suited to the process of analytic continuation. Therefore, most algorithms from [Chudnovsky and Chudnovsky, 1990; van der Hoeven, 1999] rely on the computation of transition matrices. In [van der Hoeven, 2001b], this concept was further generalized to the case when is allowed to pass through regular singularities.

Main results

In this paper, we will be concerned with the computation of the limits of holonomic functions in irregular singularities and, more generally, with the computation of generalized transition matrices along paths which are allowed to pass through irregular singularities. The algorithms are based on an effective counterpart of the accelero-summation process, as introduced by Écalle [Écalle, 1987; Écalle, 1992; Écalle, 1993; Braaksma, 1991; Borel, 1928; Ramis, 1978]. Since this process is not completely straightforward, let us first motivate its use for our application.

Consider a holonomic function with an irregular singularity at the origin. Assume that admits a (usually divergent) asymptotic expansion in a sector near the origin. Assume also that we have a bound for on . Given , we are interested in computing . Notice that is a holonomic function, so the computation of is a particular instance of the problem of computing the limit of a holonomic function in an irregular singularity.

In order to find with , for a given , it clearly suffices to compute with precision at a point with . This can be done using the analytic continuation algorithm from [Chudnovsky and Chudnovsky, 1990; van der Hoeven, 1999]. However, since the equation may have other solutions with growth rates of the form at , the transition matrix between and may contain entries of size . The computation of digits of may therefore require a time .

The situation gets a bit better if we want to compute instead of , where we assume that . In that case, using a similar method as above, we may choose with . Consequently, the computation of digits of requires a time , where . Although this already yields a polynomial-time algorithm, we are really interested in fast approximation algorithms.

Roughly speaking, the main result of this paper is that the computation of an arbitrary limit of a holonomic function at an irregular singularity may be reduced to the computation of a finite number of other, more special limits. These special limits, which are similar to above, with , will be shown to admit fast -approximation algorithms. More generally, we will extend the concept of transition matrices so as to allow for broken-line paths through irregular singularities. In particular, Stokes matrices may be seen as such “singular transition matrices”. We will show both that singular transition matrices may be computed as a function of and a singular broken-line path , and that their entries admit -approximation algorithms.

This result admits several interesting applications besides the computation of limits of holonomic functions in singularities. For instance, we may consider solutions to with a prescribed asymptotic behaviour in one or several singularities and recover the function from these “singular initial conditions” and one or more singular transition matrices. In [van der Hoeven, 2005a], it has also been shown that the possibility of computing the entries of Stokes matrices can be used for the numeric computation of the differential Galois group of . In particular, we obtained an efficient algorithm for factoring .

Our results can be compared to the only previous work on effective resummation that we are aware of [Thomann, 1995]. First of all, the current paper has the advantage that all necessary error bounds for guaranteeing a certain precision are computed automatically. Secondly, the almost linear time complexity is far better than the complexities achieved by other numerical methods, like Taylor series expansions (of complexity , at best) or the Runge-Kutta method (of complexity ).

Quick overview

Let us briefly outline the structure of this paper. In section 2, we begin with a survey of the accelero-summation process. The idea is to give a meaning to the evaluation of a divergent formal solution to via a succession of transformations. We first make the formal solution convergent at the origin by applying a formal Borel transform. We next apply a finite number of integral transforms called “accelerations”, followed by a Laplace transform. At the end, we obtain an analytic solution to in a sector near the origin, which admits the divergent formal solution as its asymptotic expansion.

The material in section 3 is more or less classical. We first recall the definition of the Newton polygon of in a singularity, as well as the relationship between its slopes and the shape of formal solutions to . In particular, the steepest slope gives us information about the maximal growth rate of solutions. We next study the Newton polygons of other operators related to , like the operators which annihilate the Borel transforms of solutions to .

In section 4, we recall several stability properties [Stanley, 1980] for holonomic functions and constants, as well as their effective counterparts. In particular, we will show that the integrands involved in the accelero-summation procedure are holonomic and how to compute vanishing operators for them. Using the results from section 3, these operators will be seen to have the required growth rates at infinity.

In section 5, we show how to compute uniform bounds for the transition matrices in suitable sectors near infinity. In section 6, these bounds will be used for the efficient evaluation of integrals with exponential decrease. In section 7, the different techniques are assembled into an effective and efficient accelero-summation procedure.

None of the algorithms in this paper have been implemented yet. Nevertheless, at least some of the algorithms should be implemented inside the standard library of the upcoming Mathemagix system [van der Hoeven et al., 2002] and any help would be appreciated.

Notations

The following notations will frequently be used in this paper:

Riemann surface of
Subset of , with
Open and closed disks with center and radius
Closed sector at the origin
Closed sector at infinity
Formal Borel transform w.r.t.
Analytic Laplace transform w.r.t. (for minors and majors)
Acceleration operators (for minors and majors)
Multiplicative conjugation of with
Compositional conjugation of with
Compositional conjugation of with
Transition matrix for along

The operators , , , , are defined in sections 2.1 and 2.2. The transformations , and are introduced in sections 3.2 and 4.2.4. Transition matrices are defined in section 4.3.

2 Reminders on the accelero-summation process

In this section, we survey the process of accelero-summation and give some explicit bounds for the acceleration kernels, as well as the interpretation of the accelero-summation process in terms of “majors”. We have aimed to keep our survey as brief as possible. It is more elegant to develop this theory using resurgent functions and resurgence monomials [Écalle, 1985; Candelberger et al., 1993]. For a more complete treatment, we refer to [Écalle, 1987; Écalle, 1992; Écalle, 1993; Braaksma, 1991; Martinet and Ramis, 1991].

2.1 The accelero-summation process

Let be the differential -algebra of infinitesimal Puiseux series in for and consider a formal power series solution to a linear differential equation over . When applicable, the process of accelero-summation enables us to associate an analytic meaning to in a sector near the origin of the Riemann surface of , even in the case when is divergent. Schematically speaking, we obtain through a succession of transformations:

(2.1)

Each is a “resurgent function” which realizes in the “convolution model” with respect to the -th “critical time” (with and ). In our case, is an analytic function which admits only a finite number of singularities above . In general, the singularities of a resurgent function are located on a finitely generated grid. Let us describe the transformations , and in more detail.

The Borel transform
We start by applying the formal Borel transform to the series . This transformation sends each to

where , and extends by strong linearity:

The result is a formal series in which converges near the origin of the Riemann surface of the logarithm. The formal Borel transform is a morphism of differential algebras which sends multiplication to the convolution product, i.e. , and differentiation to multiplication by . Intuitively speaking, the Borel transform is inverse to the Laplace transform defined below.
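
Concretely, the following sympy sketch applies the formal Borel transform coefficient-wise to a truncation of Euler's divergent series sum_{n>=0} (-1)^n n! z^(n+1), using the common normalization z^n -> zeta^(n-1)/(n-1)!; the factorials cancel and the transform becomes a truncation of the convergent geometric series 1/(1+zeta). The normalization is an assumption of the sketch and may differ from the one intended in the text.

import sympy as sp

zeta = sp.symbols('zeta')

def formal_borel(coeffs):
    # Formal Borel transform of sum_{n>=1} c_n z^n, given as [c_1, c_2, ...]:
    # z^n |--> zeta^(n-1)/(n-1)!  (one common normalization).
    return sum(c * zeta**n / sp.factorial(n) for n, c in enumerate(coeffs))

# Euler's divergent series sum_{n>=0} (-1)^n n! z^(n+1), truncated at order N
N = 12
euler = [(-1)**n * sp.factorial(n) for n in range(N)]
minor = formal_borel(euler)
# The factorials cancel: the transform is a truncation of the convergent
# geometric series 1/(1 + zeta).
print(sp.simplify(minor - sp.series(1/(1 + zeta), zeta, 0, N).removeO()))   # 0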

Accelerations
Given , the function is defined near the origin of , can be analytically continued on the axis , and admits a growth of the form at infinity. The next function is obtained from by an acceleration of the form

(2.2)

where the acceleration kernel is given by

(2.3)
(2.4)

For large , we will show in section 2.4 below that

for some constants . It follows that the acceleration of is well-defined for small on , where . The set of directions such that admits a singularity on is called the set of Stokes directions at the -th critical time. Accelerations are morphisms of differential -algebras which preserve the convolution product. Intuitively speaking, one has , where the Laplace transform is defined below.

The Laplace transform
The last function is defined near the origin of , can be analytically continued on the axis and admits at most exponential growth at infinity. The function is now obtained using the analytic Laplace transform

(2.5)

For any sufficiently small with , the value is well defined. The set of Stokes directions is defined in a similar way as in the case of accelerations. The Laplace transform is a morphism of differential -algebras which is inverse to the Borel transform and sends the convolution product to multiplication.
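
For the same Euler example, the Borel sum in the direction theta = 0 can be approximated numerically as the Laplace integral of the minor 1/(1+zeta). The sketch below uses plain numerical quadrature, with none of the certified error control developed later in this paper, and only checks empirically that the result agrees with the optimally truncated asymptotic expansion.

import numpy as np
from scipy.integrate import quad
from math import factorial

def borel_sum_euler(z):
    # Laplace transform of the minor 1/(1 + zeta) of Euler's series along the
    # direction theta = 0:  f(z) = int_0^oo exp(-zeta/z) / (1 + zeta) dzeta.
    # Plain quadrature, no rigorous error control.
    val, _ = quad(lambda t: np.exp(-t / z) / (1.0 + t), 0.0, np.inf)
    return val

z = 0.1
f = borel_sum_euler(z)
# Compare with the divergent expansion sum (-1)^n n! z^(n+1), truncated near
# its least term (n of the order of 1/z); the two agree up to roughly exp(-1/z).
partial = sum((-1)**n * factorial(n) * z**(n + 1) for n in range(10))
print(f, partial, abs(f - partial))    # difference of the order of 1e-5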

Given tuples , of critical times in and directions , we say that a formal power series is accelero-summable in the multi-direction if the above scheme yields an analytic function . For any , this function is defined in a sufficiently small sector near of the form . We denote the set of accelero-summable power series of this kind by .

The set forms a differential subring of and the map for is injective. If and are obtained from and by inserting a new critical time and an arbitrary direction, then we have . In particular, contains , where denotes the ring of convergent infinitesimal Puiseux series. Assuming that each is finite modulo , and setting , we also denote , and .

Let be the group of elements with and denote by the ring of all polynomials of the form with . The notion of accelero-summation extends to elements in instead of . Indeed, given , , , we may simply take . It can be checked that this definition is coherent when replacing by for some . By linearity, we thus obtain a natural differential subalgebra of accelero-summable transseries with critical times and in the multi-direction . We also have natural analogues and of and .

2.2 Majors and minors

In general, the acceleration and Laplace integrands are both singular at zero and at infinity. Much of the remainder of this paper is directly or indirectly concerned with the efficient integration near infinity. This leaves us with the integration at zero. A classical trick is to replace the integrand by a so-called major. This allows us to replace the integral from zero to a point close to zero by a contour integral around zero from to . We will briefly review this technique and refer to [Écalle, 1985; Candelberger et al., 1993; Écalle, 1992; Écalle, 1993] for details.

Consider an analytic germ near the origin of the Riemann surface of . A major for is an analytic germ with

The minor only depends on the class of modulo the set of regular germs at . We call a microfunction. Given a regular germ , and , the minor

admits the major

for certain polynomials and . More generally, if is locally integrable in a sector containing a point near , then

(2.6)

is a major for . The class of does not depend on the choice of .

Given majors for the from section 2.1, we may now replace (2.2) and (2.5) by

(2.7)
(2.8)

where stands for the contour (see figure 2.1 below) which consists of from to (for some small ), followed by from around to , and from to .

Using the formula (2.6) in combination with (2.7) leads to the direct expression

(2.9)

of in terms of , where

The integrals (2.9) and (2.8) further decompose into

(2.10)
(2.11)

More generally, differentiating times w.r.t. , we obtain the following formulas, on which we will base our effective accelero-summation algorithms:

(2.12)
(2.13)

In section 2.4 below, we will show that for small enough, the kernel and its derivatives in have the same order of decrease at infinity as .

Fig. 2.1. Illustrations of the contours for the acceleration and Laplace integrals. On the left, the contour for the direct integrals (2.2) and (2.5) using minors. In the middle, the contour in the case of majors (2.7) and (2.8). On the right, we use majors for the integration at and minors for the integration at infinity, as in (2.10) and (2.11).

2.3 Some elementary bounds

Lemma 2.1. Given and with , we have

Proof. In the case when , we have

If , then consider the function and its inverse . Given , we obtain

Lemma 2.2. Given , , and with , we have

Proof. For , the above lemma implies

The second relation easily follows from the first one by setting .

Lemma 2.3. Let . Then

Proof. This follows from the fact that the function admits its minimum at .

Lemma 2.4. Given and , we have

Proof. By lemma 2.1, we have

since admits its maximum in . Furthermore,

The second inequality can be checked for small by drawing the graph and it holds for large because of Stirling's formula.

Lemma 2.5. Given , , and , we have

Proof. This follows from the previous lemma, after a change of variables.

2.4 Explicit bounds for the acceleration kernels at infinity

Lemma 2.6. Let and . Denote

and assume . Then

Proof. Let . We will evaluate the integral (2.4) using the saddle point method. In the neighbourhood of the saddle point , we have

For on , we also have

For , it follows that

where the last bound follows from our assumption . We infer that

whence

(2.14)

Now let . We have

(2.15)

since admits a unique maximum at . Furthermore,

for all . Lemma 2.2 therefore implies

(2.16)

since . Putting the relations (2.14), (2.15) and (2.16) together, we obtain

This completes the proof of our lemma.

Lemma 2.7. Let and assume that , , and

(2.17)

Then

(2.18)

Proof. We first observe that

For , we also have , so that

(2.19)

Setting , lemmas 2.6 and 2.2 now imply

because of the assumption (2.17). Combining this bound with (2.19), we obtain (2.18).

3 Differential operators and Newton polygons

3.1 Definition of the Newton polygon

Let be the set of polynomials of the form with and . If , then we call the valuation of at infinity and the valuation of at zero. If , then . We write or when it is clear from the context whether we are working near or .

Now consider a differential operator

where . The support of is defined to be the set of all pairs with . The Newton polygon (see figure 3.1) of at infinity (resp. zero) is the convex hull of

where (resp. ).

The boundary of the Newton polygon consists of two vertical half-lines and a finite number of edges. The outline of (the Newton polygon of) is the sequence of points with , such that the -th edge of the Newton polygon is precisely the segment which joins to . We call

the slope of the -th edge. From the definition of the Newton polygon as a convex hull, it follows that

for all . We call the growth rate of .
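
Computationally, the Newton polygon is just the convex hull of the finitely many support points, and the slopes can be read off its edges. The sketch below computes the lower hull and its slopes for an arbitrary finite support; the convention for attaching a weight to a term of order i (degree or valuation of its coefficient, at zero or at infinity) is deliberately left to the caller, since it depends on the situation described above.

from fractions import Fraction

def newton_polygon(points):
    # Lower convex hull of a finite set of support points (i, w), together
    # with the slopes of its edges.  Which weight w to attach to a term of
    # order i depends on the convention and is left to the caller.
    best = {}
    for i, w in points:                    # for a lower hull only the smallest
        best[i] = min(w, best.get(i, w))   # weight per order matters
    pts = sorted(best.items())
    hull = []
    for p in pts:
        while len(hull) >= 2 and \
              (hull[-1][0] - hull[-2][0]) * (p[1] - hull[-2][1]) <= \
              (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0]):
            hull.pop()
        hull.append(p)
    slopes = [Fraction(q[1] - p[1], q[0] - p[0]) for p, q in zip(hull, hull[1:])]
    return hull, slopes

# A toy support with four terms; the slopes come out increasing, as they must.
print(newton_polygon([(0, 0), (1, 2), (2, 1), (3, 3)]))
# ([(0, 0), (2, 1), (3, 3)], [Fraction(1, 2), Fraction(2, 1)])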

Fig. 3.1. Illustration of the Newton polygons at infinity and zero of the operator .

3.2 Operations on differential operators

3.2.1 Multiplicative conjugation

Given and , we define to be the operator which is obtained by substituting for in . For all , we have

In the case when , we have

In particular, the Newton polygons of and coincide, both at zero and at infinity (see figure 3.2). In general, only the slopes which are steeper than the exponent of the dominant monomial of coincide.

Fig. 3.2. Illustration of the Newton polygons at infinity of from figure 3.1 and .

3.2.2 Compositional conjugation

Let and consider the transformation . If , then

so the transformation naturally extends to by sending to . We have

Consequently, if

is the outline of , then

is the outline of . In particular, . Of course, if , then we understand that the roles of infinity and zero are interchanged. In figure 3.3, we have illustrated the effect of the transformation on the Newton polygon.

Fig. 3.3. Illustration of the Newton polygons at infinity of from figure 3.1 and .

3.3 The Borel transform

Let us now consider the analogue of the formal Borel transform from section 2.1 for differential operators. It is classical that the formal Borel transform satisfies

for . Rewritten in terms of the operators and , this yields

This induces a natural -algebra morphism , by setting

Each term of an operator gives rise to a contribution

to , for suitable constants . In particular,

Let be the outline of at infinity and for all , let

If , then the -th edge gives rise to an edge with slope in the Newton polygon of at zero. If , then it gives rise to an edge with slope in the Newton polygon of at infinity (see figure 3.4). In addition, if contains several terms, then the Newton polygon of at infinity also contains an edge with slope .

Fig. 3.4. The left-hand column shows the Newton polygons at infinity of the operators and . On the right, we have drawn the Newton polygons of their Borel transforms and at infinity (the middle column) and at zero (the right-hand column).

3.4 Formal solutions

Having chosen whether we work near infinity or near the origin, let

Given , the set is called the set of exponential parts of , and the number the growth rate of . More generally given a subvector space of , we denote and .

The Newton polygon provides a lot of information about the structure of the subvector space of formal solutions to . In the sequel, we will use the following classical consequences of the Newton polygon method:

Theorem 3.1

Let be monic, of order and assume that is algebraically closed. Then the equation admits a full basis of solutions in , i.e. . Moreover, each basis element may be chosen so as to have a unique exponential part.

Theorem 3.2. Let be the slopes of the Newton polygon of . Then

  1. .

  2. .

4 Holonomy

4.1 Holonomic functions in several variables

Let be an algebraically closed subfield of . Consider the coordinates and corresponding derivatives w.r.t. . An analytic function in is said to be holonomic over if it satisfies a non-trivial linear differential equation with for each . Equivalently, we may require that is a finitely generated module over . The second criterion implies the following classical proposition [Stanley, 1980]:

Proposition 4.1. Let and be holonomic functions in . Then

  1. Any rational function in is holonomic.

  2. is a holonomic function.

  3. is a holonomic function.

  4. is a holonomic function for all .

  5. Given a point on the Riemann surface of , the specialization is holonomic.

  6. Given algebraic functions over the composition

    is holonomic.

Proof. Property (c) follows from the inclusion

and the fact that the dimension of the right-hand side is finite over . All other properties are proved in a similar way.

A more interesting closure property is the stability under definite integration. Consider a holonomic function in and a point on its Riemann surface . Let be the Riemann surface of the specialization , where and . Consider a path on with possibly singular end-points. If is singular at , then we assume that there exists a neighbourhood of , such that is a path on for all and . We now have:

Proposition 4.2. The integral is a holonomic function.

Proof. It suffices to show that is holonomic in a neighbourhood of . Let be such that

Let and be the specializations of in at the end-point resp. starting point of . Notice that and are defined in a neighbourhood of . Setting , the space

is finite dimensional over . For each and , let

The differential equation for in yields a finite relation

with for all . Partial integration also yields a relation

for every . Combining these relations, we obtain a non-trivial relation

where . For which are not a root of , we thus obtain a recurrence relation for . Therefore, the space

is again finite dimensional over . We conclude our proof with the observation that is stable under .

4.2 Computation of vanishing operators

Let us now turn our attention to the one-dimensional case. Given a monic differential operator , we denote by the space of solutions to the equation at a given point. In the case of formal solutions at zero or infinity, we will also write . Conversely, given a vector space of formal series, analytic germs or analytic functions on some domain, we say that vanishes on if . We say that is a vanishing operator for if , in which case is said to be closed.

Given two operators , we know by proposition 4.1 that there exists an operator which vanishes on . It turns out that the operator of minimal order with this property is actually a vanishing operator for . A similar property holds for the operators , and of minimal orders which vanish on , , resp. , where . What is more, there exist algorithms for computing these vanishing operators.

In this section, we will briefly recall these algorithms, and thereby give an effective proof of lemma 4.3 below. The algorithms are all more or less classical, but we could not find a reference where they are all described together. We will also prove a slightly weaker result for the operation (2.6) which associates a major to a minor.

Lemma 4.3. Let be monic differential operators in and .

  1. There exists a unique monic with .

  2. There exists a unique monic with .

  3. There exists a unique monic with .

  4. There exists a unique monic with .

4.2.1 Addition

We notice that coincides with the least common left multiple of and in the Ore ring . Indeed, any common left multiple vanishes on and any operator which vanishes on resp. right divides resp. . One may thus compute using any classical algorithm for the computation of least common left multiples, such as the Euclidean algorithm.

4.2.2 Multiplication

Given formal solutions and to and , the product and its successive derivatives , , etc. may all be reduced using the relations . In other words, , for all , where and denote the orders of resp. . Consequently, there exists a -linear relation among in . By linear algebra, we may compute the monic operator of smallest order with in . Using an adaptation of the proof of [Hendriks and Singer, 1999, Lemma 6.8], we will show that .

Let and be fundamental systems of solutions to resp. at a non-singular point, considered as elements of the field of convergent Laurent series at this point. Let and be formal indeterminates. Then the substitutions

yield an isomorphism

Now consider a monic operator of smaller order than . Using the relations , we may rewrite as a non-zero element of . It follows that . Consequently, there exist constants with . Setting and , we infer that , so is not a vanishing operator of . This shows that is indeed the differential operator of lowest order which vanishes on .

The proof that is closed is based on differential Galois theory [van der Put and Singer, 2003]: when computing the solutions to operators in in a suitable Picard-Vessiot or D-algebraic closure , any differential automorphism of over leaves both and , whence , invariant. But, given a finite-dimensional subvector space of which is invariant under any differential automorphism, we may explicitly construct an operator with ; see e.g. [van der Hoeven, 2005a, Proposition 21(b)]. This shows that is closed.
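
On small examples, the above linear-algebra construction can be carried out directly. The sympy sketch below takes the companion matrices of two monic operators, computes the coordinates of h = f g and of its successive derivatives in the basis of the products f^(i) g^(j), and stops at the first dependency over the rational functions; for exp(z) and cos(z) it recovers h'' - 2h' + 2h = 0. It is an illustration only and does not claim to compute the closed operator in general.

import sympy as sp

z = sp.symbols('z')

def kron(A, B):
    ra, ca = A.shape
    rb, cb = B.shape
    return sp.Matrix(ra * rb, ca * cb,
                     lambda i, j: A[i // rb, j // cb] * B[i % rb, j % cb])

def product_annihilator(A, B):
    # A, B: companion matrices (entries may depend on z) of monic operators
    # annihilating f and g.  Returns [a_0, ..., a_{k-1}] such that the monic
    # operator D^k + a_{k-1} D^{k-1} + ... + a_0 annihilates h = f*g.
    # We stop at the first dependency over the rational functions; no claim
    # is made that this always yields the closed operator of the text.
    r, s = A.shape[0], B.shape[0]
    M = kron(A, sp.eye(s)) + kron(sp.eye(r), B)   # derivation on the basis f^(i) g^(j)
    c = sp.zeros(1, r * s)
    c[0, 0] = 1                                   # coordinates of h = f*g itself
    rows = [c]
    while True:
        c = c.diff(z) + c * M                     # coordinates of the next derivative
        prev = sp.Matrix.vstack(*rows)
        if sp.Matrix.vstack(prev, c).rank() == prev.rank():
            k = len(rows)
            lam = sp.symbols('lam0:%d' % k)
            eqs = [sum(lam[i] * rows[i][0, j] for i in range(k)) - c[0, j]
                   for j in range(r * s)]
            sol = sp.solve(eqs, lam, dict=True)[0]
            return [sp.simplify(-sol.get(l, 0)) for l in lam]
        rows.append(c)

# f = exp(z):  f' = f               ->  A = [[1]]
# g = cos(z):  (g, g')' = (g', -g)  ->  B = [[0, 1], [-1, 0]]
print(product_annihilator(sp.Matrix([[1]]), sp.Matrix([[0, 1], [-1, 0]])))
# [2, -2],  i.e.  h'' - 2 h' + 2 h = 0  for  h = exp(z)*cos(z)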

4.2.3 Differentiation

If , then is right divisible by , so we must have . Otherwise, the least common multiple of and in has order , so there exist operators of order and of order and with . These operators may again be computed using a modified version of the Euclidean algorithm. Since and , we have .

4.2.4 Ramification

In order to compute the operator , it is more convenient to work with the derivation instead of . It is easy to see that this changes the operators , , and in the above definitions only up to a multiple by a power of .

Given a primitive -th root of unity , let be the operator with for all . Then we have for all , whence is a root of if and only if is a root of . By what precedes, it follows that satisfies . Furthermore, implies that for all . Consequently, and we conclude that .

4.2.5 Majors

Consider the operation which associates

to . We have

Given a relation for , where has order , we thus obtain a relation

for some polynomial with transcendental coefficients. Setting

(4.1)

it follows that . By theorem 3.2, we notice that the growth rate of at zero or infinity is the same as the growth rate of at zero resp. infinity, since is stable under differentiation and integration without constant term, for each .

4.2.6 Applications

Lemma 4.3 admits several useful consequences for what follows.

Corollary 4.4. If the coefficients of and are analytic on an open or closed subset of , then the same holds for the coefficients of , and .

Proof. Given functions , let denote their Wronskian. If is a basis of the solution space of a monic operator , then we recall that the operator is determined in terms of by the formula

(4.2)

In particular, if are analytic on , then so are the coefficients of , as is seen by expanding the right-hand side of (4.2). It now suffices to apply this observation to , and .
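
For concreteness, here is a small sympy sketch of this Wronskian formula: given a basis of functions, the monic operator annihilating their span is obtained by expanding the Wronskian of the basis together with a generic function y and dividing by the leading coefficient. The normalization and the chosen example are assumptions of the sketch, not a reproduction of (4.2) verbatim.

import sympy as sp

z = sp.symbols('z')

def operator_from_basis(basis):
    # Monic operator D^r + a_{r-1} D^{r-1} + ... + a_0 annihilating the span
    # of the given functions, read off from the Wronskian of (basis..., y);
    # the normalization by the leading coefficient is assumed here.
    r = len(basis)
    y = sp.Function('y')(z)
    rows = [[sp.diff(f, z, i) for i in range(r + 1)] for f in basis]
    rows.append([sp.diff(y, z, i) for i in range(r + 1)])
    det = sp.Matrix(rows).det()
    lead = det.coeff(sp.diff(y, z, r))
    return [sp.simplify(det.coeff(sp.diff(y, z, i)) / lead) for i in range(r)]

print(operator_from_basis([sp.exp(z) * sp.cos(z), sp.exp(z) * sp.sin(z)]))
# [2, -2],  i.e.  y'' - 2 y' + 2 y = 0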

Corollary 4.5. Let be monic and . Then

  1. .

  2. .

  3. .

  4. .

Proof. This follows directly from the lemma together with theorem 3.2.

4.3 Transition matrices

4.3.1 Classical transition matrices

Consider a monic differential operator whose coefficients are analytic functions on a Riemann surface . Given a point , it is well known that there exists a unique canonical fundamental system

of analytic solutions to at with the property that for all . Since is linear, an arbitrary solution to is uniquely determined by the vector

of its initial conditions at by

(4.3)

More generally, given a path on from to another point , the values of the analytic continuations of along the path also linearly depend on . Consequently, there exists a unique scalar matrix with

(4.4)

We call the transition matrix for along the path . Dually, we have

(4.5)

because of (4.3). Also, if is a second path, then

(4.6)

and in particular

(4.7)

4.3.2 Singular transition matrices

The notion of transition matrices can be generalized to allow for paths which pass through regular or irregular singularities of the operator . In order to do this, we start by generalizing the idea of a canonical fundamental system of formal solutions in the singularity .

In the case when the coefficients of are in , theorem 3.1 tells us that there exists a fundamental system of solutions at . This result is refined in [van der Hoeven, 2001a], where we show how to compute a canonical basis of so-called “complex transseries” solutions, which is uniquely characterized by suitable asymptotic properties. In particular,

Notice that there are other definitions of “canonical” systems of solutions [van Hoeij, 1997], which share the property that they can be computed effectively in terms of the operator .

Given a notion of a “canonical system of formal solutions at a singularity ”, we obtain a dual notion of “initial conditions at ” for arbitrary formal solutions, via the relation (4.3). Now assume in addition that, for a suitable sectorial neighbourhood of , we are able to associate a genuine analytic function to any formal solution at . Then either (4.4) or (4.5) yields a definition for the transition matrix along a straight-line path from to . In general, the association depends on one or several parameters, like the non-singular directions in the accelero-summation procedure. We will now show how to encode these parameters in a suitable generalization of a broken-line path.

Assume from now on that . We define a singular broken-line path as a path , where each is either

Moreover, for each , the open ball with center and radius is assumed to contain no singularities other than . If the are all non singular or regular singular, then we call a regular singular broken-line path.

Now given an irregular singular point , such that for critical times and directions , we define the transition matrix

for any with and such that is sufficiently close to . For regular singular points , a similar definition was given in [van der Hoeven, 2001b].

In view of (4.6) and (4.7), we may extend this definition to arbitrary singular broken-line paths. In particular, it can be checked that the Stokes matrices for are all of the form

Notice that this definition does not depend on the choice of . In a similar way as in [van der Hoeven, 2001b], it is also possible to construct a suitable extension of with “irregular singular points”, in such a way that singular broken-line paths may be lifted to . However, such an extension will not be needed in what follows.

4.3.3 Transition matrices for the multivariate case

It is well known that the theory of Gröbner bases generalizes to partial differential operators in the ring . Consider a zero-dimensional system of such operators given by a Gröbner basis . Let be the set of tuples , such that holds for no leading monomial of one of the . We may enumerate , with for a fixed total ordering on the monoid .

Given a non-singular point for , there again exists a unique canonical fundamental system

of analytic solutions to at with the property that for all . Also, an arbitrary solution to is uniquely determined by the vector

of its initial conditions at by . Consequently, the definitions and properties (4.4)–(4.7) naturally generalize to the multidimensional paths which avoid the singularities of .

4.4 Holonomic constants

Recall that and stand for the open and closed disks of center and radius . A constant in is said to be holonomic over if there exists a linear differential operator and a vector of initial conditions , such that the are defined on and , where is the unique solution to with for . We denote by the set of holonomic constants over .

Proposition 4.6

  1. is a subring of .

  2. Let be a linear differential operator of order in . Then for any non singular broken-line path with end-points in .

  3. Let be a Gröbner basis for a zero-dimensional system of differential operators in . Then for any non singular broken-line path with end-points in , we have .

Proof. Consider holonomic constants and , where and are solutions to and with initial conditions in resp. , and where the coefficients of and are defined on . By corollary 4.4, the coefficients of are again defined on and , where is the unique solution with initial conditions for . A similar argument shows the stability of under addition.

As to (b), we first observe that the transition matrix along the straight-line path from to has holonomic entries, provided that the coefficients of are defined on . Indeed, by corollary 4.4, the coefficients of the monic operators with solution spaces are defined on . Using a transformation with and , it follows that has holonomic entries whenever the are defined on the closed disk . Now any broken-line path is homotopic to a broken-line path such that the are defined on the closed disks . From (a), we therefore conclude that has holonomic entries.

As to the last property, we first notice that the function is holonomic in for any fixed and in . In a similar way as above, it follows that the multivariate transition matrix from section 4.3.3 along a straight-line path has entries in for sufficiently close and in . Since any non singular broken-line path is homotopic to a finite composition of straight-line paths of this kind, we conclude by the multivariate analogue of (4.6) and (a).

A number in is said to be a singular holonomic constant over if there exists a linear differential operator and a vector of initial conditions , such that the are defined on and , where is the unique solution to with for . We understand that the limit is taken on the straight-line path from to . If is regular singular at , then we call a regular singular holonomic constant over . We denote by the class of singular holonomic constants over and by the class of regular singular holonomic constants over .

Proposition 4.7

  1. is a subring of .

  2. is a subring of .

  3. Let be a linear differential operator of order in . Then for any regular singular broken-line path as in section 4.3.2.

  4. Let be a linear differential operator of order in . Then for any singular broken-line path as in section 4.3.2.

Proof. Properties (a) and (b) are proved in a similar way as above. In view of (4.6), it suffices to prove (c) and (d) in the cases of paths of the form and/or . The first case is treated in a similar way as above, so let us focus on the second case. Without loss of generality, we may assume that .

Now, as will be shown in detail in section 7.3, the matrix can be expressed as a product of matrices whose entries are either in , or of the form

(4.8)

or

(4.9)

where , and is holonomic with initial conditions in . Moreover, may be chosen as large as desired. By the results from section 4.2, the integrands are all holonomic, with initial conditions in at . Modulo a suitable change of variables of the form , we may therefore reinterpret (4.8) resp. (4.9) as the limit in of a holonomic function on with initial conditions in at .

We still have to prove that this limit can be rewritten as the limit of a holonomic function on with initial conditions in at . Now given the equation for , let be the fundamental system of solutions for at , so that

Since are in , we have for suitable holonomic functions on with initial conditions in at and regular singularities at . Now

where is a holonomic function on with initial conditions in at the origin.

5 Bounds for the transition matrices

5.1 Integral formula for transition matrices

Consider a linear differential operator

whose coefficients are analytic functions on an open or closed subset of . We will give an explicit formula for the transition matrix along a path in .

Let us first rewrite the equation as a first order system and give an alternative characterization for the transition matrix. Let

Then the differential equation

(5.1)

admits a unique solution with . Given a path in , it is not hard to see that coincides with the analytic continuation of along .

Given an analytic function on , we will denote by the unique analytic function on given by

Then the system (5.1) with our initial condition admits a natural solution

(5.2)

We will show below that this “integral series” indeed converges when is a straight-line path. In fact, using a similar technique, one can show that the formula is valid in general, but we will not need that in what follows.
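
As a sanity check on these definitions, a transition matrix along a straight-line path can be computed with a plain numerical ODE solver, by integrating the first order system column by column starting from the identity matrix. This is not the certified binary splitting method used elsewhere in the paper; the Airy example and the orientation convention (initial conditions at the starting point mapped to those at the end-point) are assumptions of the sketch.

import numpy as np
from scipy.integrate import solve_ivp

def transition_matrix(A, z0, z1, order):
    # Numeric transition matrix of the system Y' = A(z) Y along the straight
    # line from z0 to z1 (a plain ODE-solver sketch, without rigorous error
    # control).
    dz = z1 - z0
    def rhs(t, y):
        Y = y.reshape(order, order)
        return (dz * A(z0 + t * dz) @ Y).ravel()
    y0 = np.eye(order, dtype=complex).ravel()
    sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(order, order)

# Airy equation f'' = z f, written as a first-order companion system.
A = lambda z: np.array([[0, 1], [z, 0]], dtype=complex)
d01 = transition_matrix(A, 0.0, 1.0, 2)
d12 = transition_matrix(A, 1.0, 2.0, 2)
d02 = transition_matrix(A, 0.0, 2.0, 2)
# composition along concatenated paths, as in (4.6)
print(np.allclose(d12 @ d01, d02))   # True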

5.2 Majorants

Let and denote the spaces of continuous -valued resp. -valued functions on . Given matrices and of the same sizes and with coefficients in resp. , we say that is majored by , and we write , if

for all . Given majorations and , we clearly have majorations

(5.3)
(5.4)

Assuming that every point in can be reached by a straight-line path starting at , we also have

(5.5)

where

Assume now that is bounded on . Then there exist constants with

and we may assume without loss of generality that admits pairwise distinct eigenvalues. From the rules (5.3), (5.4) and (5.5), it follows that

The right-hand side of this majoration can be rewritten as , where is the unique solution on to the equation

such that . Now let and be matrices with

where

Then we have

This shows in particular that (5.2) converges when is a straight-line path, since it suffices to replace by a compact convex subset which contains a neighbourhood of .

5.3 Bounds for the transition matrices

Given an operator with coefficients in which are bounded at infinity, it is not hard to explicitly compute a sector with on which the have no poles and a majorating matrix with coefficients in . The aperture may be chosen as close to as desired. Then the results from the previous section yield:

Theorem 5.1. There exists an algorithm which, given an operator

with for all at infinity, computes a sector and constants with

for all straight-line paths inside .

More generally, given an operator of growth rate , the operator has growth rate one and we have

for all straight-line paths whose image under is homotopic to the straight-line path . Moreover, after replacing by in and dividing by a suitable power of , we observe that fulfills the conditions of theorem 5.1. We thus obtain:

Theorem 5.2. There exists an algorithm which, given an operator

with growth rate at infinity, computes a sector and constants with

for all straight-line paths inside .

Remark 5.3. In fact, the hypothesis that is a straight-line path is not really necessary in theorem 5.1. With some more work, one may actually consider sectors of at infinity with aperture larger than . In theorem 5.2, this allows the aperture of to be taken as large as desired.

6 Effective integral transforms

Consider an operator

with growth rate at infinity. Let be a sector of aperture such that does not vanish on and such that we have a bound

(6.1)

for all . Let

so that the ball centered at with radius is just contained in (see figure 6.1), and let be a fixed constant of small bit-size, with .

Fig. 6.1. The sector and the associated constants , , and .

6.1 Uniformly fast approximation of transition matrices

Let with and . Assuming that , we may now use the algorithm approx below in order to approximate at precision . The computation of is done using the binary splitting algorithm from [Chudnovsky and Chudnovsky, 1990; van der Hoeven, 1999].

Algorithm approx

Input: as above

Output: a matrix with

if then

Let be minimal with , where

Consider the expansion

Compute at precision

Return

else

Let

Compute

Compute

Return

Theorem 6.1

  1. The algorithm approx is correct.

  2. Let and let be the sum of the bit-sizes of and . Then the running time of the algorithm is uniformly bounded by .

Proof. The correctness of the algorithm in the “single-step case” when follows from (6.1) and Cauchy's formula, since

In the “multi-step case” when , we have

and the result follows by induction.

As to the complexity analysis, let be minimal such that and denote

Then the recursive application of the algorithm gives rise to single-step cases for each with . We have and claim that the precision at which we approximate each satisfies , where .

Indeed, using induction over , this is clear in the case when . In the multi-step case, we have and . Hence, is approximated at precision . The induction hypothesis also implies that each is approximated at precision , where and . We conclude that .

Having proved our claim, let us now estimate the cost of each single-step approximation of at precision . Since , the minimal satisfies

Furthermore, the entries of are -digit numbers, since

and the size of is bounded by . By a similar argument as in the start of section 4.1 of [van der Hoeven, 1999], it follows that the -approximation of is computed in time using binary splitting. Since , the overall running time is therefore bounded by .
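
The essential point in this analysis is that the path towards infinity is subdivided geometrically, so that only a logarithmic number of single steps is needed. The sketch below generates such a subdivision for a radial path and composes user-supplied single-step matrices in the order prescribed by (4.6); the single-step routine itself (truncated power series expansions evaluated by binary splitting in the algorithm above) is left abstract.

def approx_path(z0, z1):
    # Subdivision used by algorithm approx: split the (radial) path from z0
    # to z1 into steps z -> z' with |z'| <= 2 |z|, so that only about
    # log2(|z1/z0|) single-step transition matrices are needed.
    # Illustrative sketch; assumes z1 is a positive real multiple of z0.
    points = [z0]
    while abs(z1) > 2 * abs(points[-1]):
        points.append(2 * points[-1])
    points.append(z1)
    return points

def compose(single_step, points):
    # Multiply single-step matrices in the order of analytic continuation,
    # as in (4.6):  Delta(path) = Delta(last step) ... Delta(first step).
    # single_step(a, b) is any routine returning the matrix for one step.
    D = None
    for a, b in zip(points, points[1:]):
        step = single_step(a, b)
        D = step if D is None else step @ D
    return D

pts = approx_path(1.0, 1.0e6)
print(len(pts))    # 21 points, i.e. about log2(1e6) single steps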

6.2 Fast approximation of integral transforms

Consider a second differential operator with growth rate at infinity. Let be a solution to with initial conditions in at a point with and . Assume that satisfies a bound

(6.2)

on , where . Our aim is to compute

Now the primitive

satisfies the equation , where the operator has growth rate at infinity. Moreover, admits initial conditions in at .

Assuming that we chose and that the bound (6.1) holds for the transition matrices associated to , we may now use the following simple algorithm for the approximation of .

Algorithm integral_approx

Input:

Output: an approximation for with

Let be the vector of initial conditions for at , so that

Take with such that

Return approx

In the case when , we notice that

for all , so we may take

(6.3)

where is the largest number in below . In the case when , we may use lemma 2.3 to replace the bound (6.2) by a bound of the form

with . Then

and we may take

(6.4)

For both formulas (6.3) and (6.4), we have . Applying theorem 6.1, it follows that

Theorem 6.2

The algorithm integral_approx is correct and its running time is bounded by , where .
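
The choice of the truncation point in integral_approx reduces to an explicit inequality: if the integrand is bounded by C e^{-sigma t}, then a tail below eps is guaranteed as soon as (C/sigma) e^{-sigma R} <= eps. The sketch below implements only this elementary special case and ignores the polynomial weights handled by (6.4); it is a simplified stand-in for (6.3), whose exact form depends on constants not reproduced here.

from math import log

def truncation_point(C, sigma, eps, r0):
    # Choose R >= r0 such that the tail of an integral whose integrand is
    # bounded by C * exp(-sigma * t) is at most eps:
    #   int_R^oo C e^{-sigma t} dt = (C / sigma) e^{-sigma R} <= eps.
    # Simplified version ignoring polynomial weights in the bound.
    return max(r0, log(C / (sigma * eps)) / sigma)

print(truncation_point(C=10.0, sigma=2.0, eps=1e-30, r0=1.0))   # ~ 35.3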

7 Effective accelero-summation

Let us now show how to put the results from the previous sections together into an accelero-summation algorithm for holonomic functions. Let be a formal solution with initial conditions in at the origin to the equation with . We will first show how to determine the critical times in and the Stokes directions at each critical time. Having fixed , we next detail the effective acceleration-procedure and show how to efficiently evaluate in a sector close to the origin.

7.1 Setting up the framework

Normalization
Without loss of generality, we may assume that the valuation of at zero is larger than the degree of in . Indeed, it suffices to replace by and by for a sufficiently large .

Critical times
Let be the non-horizontal slopes of the Newton polygon of at the origin. We take , so the critical times are . For example, in figure 3.4, the critical times are and .

Equations for and
For each critical time , let us show how to compute vanishing operators for and . Let be relatively prime with . Since , we notice that the valuation of in is larger than one.

  1. We first compute and . We may reinterpret as an operator in and notice that .

  2. Let be minimal, such that . We compute the Borel transform . Since , we formally have . In fact, since the accelero-summation process preserves differentially algebraic relations, we will also have .

  3. Compute with using the procedure from section 4.2.5.

Singular directions
For our accelero-summation process to work, it will suffice to avoid the non-zero singularities of the operator at each critical time . In other words, denoting by the order of , we take .

Growth rates of and
Given a critical time , let us now study the growth rates of and at zero and infinity. By corollary 4.5, and with as above, the slopes of the Newton polygon of are . By section 3.3 and formula (4.1), it follows that the non-horizontal slopes of the Newton polygons of and at the origin are

In particular, if , then is regular singular at and [van der Hoeven, 2001b] shows how to compute the values of in the neighbourhood of . We also infer that the non-horizontal slopes of the Newton polygon of and at infinity are

and possibly . In particular, if , then the growth rate of at infinity is . In view of theorem 5.2, we may thus apply to (see below for further details). Also, if , then the growth rate of at infinity is zero or one and theorem 5.2 shows that we are allowed to apply to .

The acceleration kernels
Given a critical time with and , consider the acceleration kernel

The choices of and will be detailed in the next section. In order to compute (2.13), we need an equation for in , of growth rate at infinity. Setting

we have

whence and

By looking at the Newton polygon, we observe that has growth rate at . Now

(7.1)

for a suitable contour . Applying a suitable ramification, followed by and to , we obtain a vanishing operator for , with growth rate at infinity. Although (7.1) is not really a Borel transform (at ), it does satisfy the formal properties of a Borel transform. In other words, is a vanishing operator for with respect to , of growth rate at .

Equations for the integrands
We finally need equations for the integrands of (2.12) and (2.13). If , then we have shown above how to construct a vanishing operator for at infinity. In section 4.2.3, we have also shown how to construct a vanishing operator for each . It follows that and are vanishing operators for the first and second integrands in (2.12). Moreover, the operator has growth rate at infinity, by lemma 4.3. Similarly, and are vanishing operators for the first and second integrands in (2.13), and has growth rate at infinity.

7.2 Calibration

Assume now that are fixed non singular directions with . In order to approximate for close to , we first have to precompute a certain number of parameters for the acceleration process, which do not depend on the precision of the desired approximation for . In particular, we will compute a suitable sector near the origin, such that the effective accelero-summation procedure will work for every . Besides , for each critical time , we precompute

Let us show more precisely how to do this.

Computing
If is the smallest non-zero singularity of , then we may take arbitrarily with . By construction, is (at worst) regular singular at , whence so is , as we see from (4.1). From [van der Hoeven, 2001b], it therefore follows that admits an -approximation algorithm for each .

Computing , and
Given , and setting , we use theorem 5.2 to compute a sector and constants , with

for all straight-line paths in . By lemmas 2.7 and 2.3, we may compute a subsector and small and with , such that we have a bound

for all and all . We notice that is regular singular at the origin (for the same reason as above) with initial conditions in . By [van der Hoeven, 2001b], we thus have -approximation algorithms for for any and .

Computing and
By theorem 5.2, we may also compute a sector and constants , with

for all straight-line paths in . Choosing sufficiently small and sufficiently large, we obtain a subsector with

for all , and , with as close to as desired.

7.3 Approximation of

For each and , let be the unique solution to with for all . Using the analytic continuation algorithm from [van der Hoeven, 1999], we may efficiently evaluate all derivatives of and its minor at any non-singular point above . For each , we also denote by the unique solution to with for all .

Given and , there now exist -approximation algorithms for the integrals.

Indeed, the first two integrals can be approximated using the algorithm from [van der Hoeven, 1999], applied to the operators and . The last one is computed using the algorithm integral_approx. Notice that the path in the second integral consists of a circular arc composed with a straight-line segment of constant argument. We regard the numbers

as the entries of a matrix

By construction, we thus have

(7.2)

Similarly, if , then there exist -approximation algorithms for

and these numbers again form the entries of a matrix . By construction, we have

(7.3)

Now we already observed that the algorithms from [van der Hoeven, 2001b] provide -approximation algorithms for the entries of the vector

From (7.2) and (7.3), it follows that

and the entries of this vector admit -approximation algorithms.

7.4 Main results

Summarizing the results from the previous sections, we have proved:

Theorem 7.1

  1. There exists an algorithm which takes with an irregular singularity at on input and which computes the critical times for , together with the sets of singular directions modulo . In addition, given , with , the algorithm computes a sector with to be used below.

  2. There exists an algorithm which takes the following data on input:

    • , , and as above;

    • A formal solution to (determined by initial conditions in );

    • above , and .

    Setting , the algorithm computes with . Moreover, setting , this computation takes a time .

Corollary 7.2. Singular holonomic constants in admit -approximation algorithms.

Theorem 7.1 in particular applies to the fast approximation of the singular transition matrices from section 4.3.2. Indeed, let with , and be one of the canonical solutions to at the origin. Then may be accelero-summed by theorem 7.1 and may be evaluated at points above using fast exponentiation and logarithms. We thus obtain:

Corollary 7.3. There exists an algorithm which takes the following data on input:

The algorithm computes a matrix with entries in and . Moreover, setting , the algorithm takes a time .

We have summarized the complexities of efficient evaluation algorithms for holonomic functions in table 7.1 below. In the rightmost column, the complexity bound for divergent series follows from corollary 7.3, when composing the transition matrix between zero and a point close to with the non singular transition matrix from to .

series of type evaluation in evaluation in general

Table 7.1. Summary of the complexities of evaluation of different types of holonomic series. We assume that and that the satisfy for certain . For the series in the last row, we assume that “evaluation” is done using an appropriate accelero-summation scheme. For the rightmost column, we do not count the cost of the approximation of the constant itself.

Remark 7.4. Although we did not mention the complexity bound for entire series in [van der Hoeven, 1999], the complexity analysis of algorithm C is easily adapted to this case, since

In particular, the computation of exponentials (and logarithms) using this (and Newton's) method is only a factor of less efficient than the best known algorithm, based on Landen's transform [Brent, 1976b].

Remark 7.5. In [van der Hoeven, 1999], we assumed that is an algebraic number field (i.e. a finite-dimensional field extension of ) rather than the field of all algebraic numbers over . Of course, both points of view are equivalent, since given a finite number of algebraic numbers , there exists an algebraic number field with .

It is convenient to work w.r.t. a fixed algebraic number field in order to have an algorithm for fast multiplication. For instance, given a basis of , we may assume without loss of generality that

(7.4)

after multiplication of the by suitable integers. Then we represent elements of as non-simplified fractions , where and . In this representation, and using (7.4), we see that two fractions of size can be multiplied in time .
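
As an illustration of this representation (our own sketch, with hypothetical helper names, assuming the generator has been normalized so that its minimal polynomial mu is monic with integer coefficients, in the spirit of (7.4)), an element of the field is stored as a non-simplified fraction P(alpha)/u, with P an integer polynomial and u an integer; multiplying two such fractions is then one integer polynomial product followed by a reduction modulo mu, in line with the cost estimate above.

def poly_mul(a, b):
    """Product of two integer polynomials given as coefficient lists (low degree first)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def poly_mod(a, mu):
    """Reduce a modulo the monic integer polynomial mu of degree d = len(mu) - 1."""
    a, d = a[:], len(mu) - 1
    while len(a) > d:
        c = a.pop()                  # leading coefficient; its degree is len(a) after the pop
        for k in range(d):           # use x^d = -(mu[0] + ... + mu[d-1] * x^(d-1)) modulo mu
            a[len(a) - d + k] -= c * mu[k]
    return a

def field_mul(x, y, mu):
    """(P/u) * (Q/v) = ((P*Q) mod mu) / (u*v), without simplification."""
    (P, u), (Q, v) = x, y
    return poly_mod(poly_mul(P, Q), mu), u * v

For example, with mu = [-2, 0, 1] encoding x^2 - 2 and x = ([0, 1], 1) representing sqrt(2), the call field_mul(x, x, mu) returns ([2, 0], 1), i.e. the number 2.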

Remark 7.6. In the case when is a subfield of which is not contained in the field of algebraic numbers, the algorithms from this paper and [van der Hoeven, 1999; van der Hoeven, 2001b] still apply, except that the complexity bounds have to be adjusted. Let us make this more precise using the idea from [Chudnovsky and Chudnovsky, 1990] for the computation of Taylor series coefficients of holonomic functions. We first observe that the efficient evaluation of holonomic functions essentially boils down to the efficient evaluation of matrix products

where is a matrix with entries in (in the regular singular case, one also has a finite number of exceptional values of for which is explicitly given and with entries in ). Even if , we may still compute the matrix products

using dichotomy

as polynomials in of degree . This requires a time , when working with a precision of digits. Assuming for simplicity that is a perfect square, and taking , we next use an efficient evaluation algorithm [Moenck and Borodin, 1972; Borodin and Moenck, 1974] for the substitution of in . This requires a time . We finally compute

in time . Assuming that , this yields an algorithm for the -digit evaluation of of complexity . In table 7.1, the complexities in the three different rows should therefore be replaced by , resp. . Indeed, for the first two cases, we have resp. . In the last case, we have the usual overhead. Notice that there is no need to distinguish between the columns.
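
The following sketch (again our own illustration rather than the paper's implementation, with hypothetical names block and evaluate_product) spells out the strategy just described at the simplest level: the product of s consecutive matrices with entries polynomial in the index is computed once symbolically by dichotomy, the resulting polynomial matrix is evaluated at the points 0, s, 2s, ..., and the numerical values are multiplied together. For brevity the evaluation is done by Horner's rule and in exact arithmetic; the complexity bound of the remark relies instead on fast multipoint evaluation [Moenck and Borodin, 1972; Borodin and Moenck, 1974] and on truncated multidigit arithmetic.

from math import isqrt

def pmul(a, b):                                  # product of integer polynomials
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] += ai * bj
    return res

def padd(a, b):                                  # sum of integer polynomials
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def peval(a, x):                                 # Horner evaluation at x
    v = 0
    for c in reversed(a):
        v = v * x + c
    return v

def pmat_mul(A, B):
    """Product of two matrices whose entries are polynomials (coefficient lists)."""
    n, m, p = len(A), len(B), len(B[0])
    C = []
    for i in range(n):
        row = []
        for j in range(p):
            acc = [0]
            for k in range(m):
                acc = padd(acc, pmul(A[i][k], B[k][j]))
            row.append(acc)
        C.append(row)
    return C

def mat_mul(A, B):
    """Product of two matrices of numbers."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def block(P_poly, lo, hi):
    """Polynomial matrix P(x+hi) ... P(x+lo+1), computed by dichotomy.
    P_poly(kp) must return the matrix P with entries expressed as polynomials
    in the symbolic index kp (itself a coefficient list)."""
    if hi - lo == 1:
        return P_poly([lo + 1, 1])               # kp = x + (lo + 1)
    mid = (lo + hi) // 2
    return pmat_mul(block(P_poly, mid, hi), block(P_poly, lo, mid))

def evaluate_product(P_poly, N):
    """P(N) ... P(2) P(1), assuming N = s*s: s symbolic 'baby steps',
    then s numeric 'giant steps'."""
    s = isqrt(N)
    assert s * s == N, "this sketch assumes N is a perfect square"
    Pi = block(P_poly, 0, s)                     # Pi(x) = P(x+s) ... P(x+1)
    vals = [[[peval(e, i * s) for e in row] for row in Pi] for i in range(s)]
    M = vals[0]                                  # Pi(0) = P(s) ... P(1)
    for i in range(1, s):
        M = mat_mul(vals[i], M)                  # left-multiply by Pi(i*s)
    return M

For instance, taking P_poly = lambda kp: [[kp]], so that P(k) is the 1 x 1 matrix (k) of the factorial recurrence, evaluate_product(P_poly, 16) returns [[20922789888000]], i.e. 16!.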

8 Conclusion

This last paper in a series [van der Hoeven, 1999; van der Hoeven, 2001b] on the efficient evaluation of holonomic functions deals with the most difficult case of limit computations at irregular singularities, where the formal solutions are generally divergent. We have not only shown how to compute such limits and the so-called singular transition matrices in terms of the equation and broken-line paths, but also that the resulting constants belong to the very special class of complex numbers whose digits can be computed extremely fast.

Since it is quite remarkable for a number to belong to , an interesting question is whether there are any other “interesting constants” in which cannot be obtained using the currently available systematic techniques: the resolution of implicit equations using Newton's method and the evaluation of holonomic functions, including their “evaluation” at singular points.

Because of the emphasis in this paper on fast approximation algorithms, we have not yet investigated in detail the most efficient algorithms for obtaining approximations with limited precision. Indeed, given an initial operator of order and degree in , ramification, the Borel transform and multiplication by the acceleration kernel lead to vanishing operators of far larger (although polynomial) size . If only limited precision is required, one may therefore prefer a naive -algorithm for computing the integral transforms, which avoids the computation of large vanishing operators. In some cases, one may also use summation up to the least term, as sketched in the appendix below.

In this paper, we have restricted ourselves to the very special context of holonomic functions, even though Écalle's accelero-summation process has a far larger scope. Of course, the results in our paper are easily generalized to the case of more general algebraically closed subfields of , except that we only get -approximation algorithms. Nevertheless, following [Écalle, 1987; Braaksma, 1991; Braaksma, 1992], it should also be possible to give algorithms for the accelero-summation of solutions to non-linear differential equations.

Appendix A. Summation up to the least term

Let and let be a solution to with a formal power series expansion . It is well known [Poincaré, 1886] that the truncated sum

up to the term for which is minimal usually provides an exponentially good approximation for . Even though such a truncation does not allow for the computation of an arbitrarily good approximation of the value for fixed , it is well adapted to situations in which only limited precision is required. Indeed, in order to compute , we may directly apply the binary splitting algorithm from [Chudnovsky and Chudnovsky, 1990; van der Hoeven, 1999].
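
As a concrete illustration (our own sketch, not the cited binary splitting algorithm, and using floating-point rather than exact arithmetic), summation up to the least term can be performed by adding the terms of the divergent expansion while their absolute values decrease and stopping at the first term that no longer does, whose size then serves as a heuristic error estimate.

def sum_to_least_term(coeffs, z, max_terms=10**6):
    """Sum the (possibly divergent) series sum_k coeffs(k) * z^k up to the
    least term: terms are added while their absolute values decrease, and the
    size of the first omitted term is returned as a heuristic error estimate."""
    total, k = 0.0, 0
    term = coeffs(0)
    while k < max_terms:
        nxt = coeffs(k + 1) * z ** (k + 1)
        total += term
        if abs(nxt) >= abs(term):                # least term reached: truncate
            return total, abs(nxt), k + 1
        term, k = nxt, k + 1
    return total, abs(term), k

For the classical Euler series with coefficients c_k = (-1)^k k! and z = 0.05, this stops after 20 terms and returns an error estimate of about 2e-8, in line with the typical exp(-1/|z|)-type accuracy up to a modest factor.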

In this appendix, we will sketch how summation up to the least term can be made more precise using the accelero-summation process. We start from a formal solution to . Given , we define

Our aim is to compute an explicit bound for for a suitable non-singular multi-direction . Modulo a change of variables , we may take .

Consider a critical time . If , then is convergent at the origin, so we may compute a bound of the form

(A.1)

on an interval at the origin, using [van der Hoeven, 2001b]. For , we assume by induction that we have a bound

(A.2)

on a sector at the origin and for sufficiently large . Using [van der Hoeven, 2001b] a second time, we may also compute bounds for the coefficients of as a polynomial in . At each critical time , this leads to further bounds

(A.3)

for .

Assuming that , we now have

We may further decompose

(A.4)

if and similarly with if .

By lemmas 2.6 and 2.5, we may compute , , , and with

for and . Using (A.3), we thus get

(A.5)

Using the techniques from section 7, we may also compute a bound

for . Using lemma 2.6 and (A.5), we may thus compute , , and with

(A.6)

for and . Combining (A.4) and (A.6), we may therefore compute and such that (A.2) recursively holds at stage .

In the case when , similar computations yield constants , , , and a small sector with aperture , such that

(A.7)

for all and all . The optimal for which this bound is minimal satisfies

We thus have

for some explicitly computable . This completes the proof of the following:

Theorem A.1. There exists an algorithm which takes on input

and which computes and , such that the bound

holds for any and . In particular, for some computable constant and precisions with

(A.8)

we may compute an -approximation of for in time , where the complexity bound is uniform in .

Acknowledgment
The author would like to thank the first anonymous referee for several helpful comments and corrections.

References

[Borel, 1928]

Borel, E., 1928. Leçons sur les séries divergentes, 2nd Edition. Gauthier-Villars, reprinted by Jacques Gabay.

[Borodin and Moenck, 1974]

Borodin, A., Moenck, R., 1974. Fast modular transforms. Journal of Computer and System Sciences 8, 366–386.

[Braaksma, 1991]

Braaksma, B., 1991. Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Diff. Eq. 92, 45–75.

[Braaksma, 1992]

Braaksma, B., 1992. Multisummability of formal power series solutions to nonlinear meromorphic differential equations. Ann. Inst. Fourier de Grenoble 42, 517–540.

[Brent, 1976a]

Brent, R., 1976a. The complexity of multiprecision arithmetic. In: Complexity of Computational Problem Solving.

[Brent, 1976b]

Brent, R., 1976b. Fast multiple-precision evaluation of elementary functions. Journal of the ACM 23 (2), 242–251.

[Candelberger, Nosmas, and Pham, 1993]

Candelberger, B., Nosmas, J., Pham, F., 1993. Approche de la résurgence. Actualités mathématiques. Hermann.

[Chudnovsky and Chudnovsky, 1990]

Chudnovsky, D., Chudnovsky, G., 1990. Computer algebra in the service of mathematical physics and number theory (Computers in Mathematics, Stanford, CA, 1986). In: Lect. Notes in Pure and Applied Math. Vol. 125. Dekker, New York, pp. 109–232.

[Écalle, 1985]

Écalle, J., 1985. Les fonctions résurgentes I–III. Publ. Math. d'Orsay 1981 and 1985.

[Écalle, 1987]

Écalle, J., 1987. L'accélération des fonctions résurgentes (survol), unpublished manuscript.

[Écalle, 1992]

Écalle, J., 1992. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques.

[Écalle, 1993]

Écalle, J., 1993. Six lectures on transseries, analysable functions and the constructive proof of Dulac's conjecture. In: Schlomiuk, D. (Ed.), Bifurcations and periodic orbits of vector fields. Kluwer, pp. 75–184.

[Gosper and Schroeppel, 1972]

Gosper, R., Schroeppel, R., February 1972. Artificial intelligence memo. Tech. Rep. 239, MIT AI Laboratory, item 178 on the evaluation of functions, see http://home.pipeline.com/~hbaker1/hakmem/hakmem.html.

[Haible and Papanikolaou, 1997]

Haible, B., Papanikolaou, T., 1997. Fast multiple-precision evaluation of elementary functions. Tech. Rep. TI-7/97, Universität Darmstadt.

[Hendriks and Singer, 1999]

Hendriks, P., Singer, M., 1999. Solving difference equations in finite terms. JSC 27 (3), 239–259.

[Karatsuba, 1991]

Karatsuba, E., 1991. Fast evaluation of transcendental functions. Problems of Information Transmission 27, 339–360.

[Karatsuba, 1993]

Karatsuba, E., 1993. Fast evaluation of Bessel functions. Integral Transforms and Special Functions 1 (4), 269–276.

[Karatsuba, 1995]

Karatsuba, E., 1995. Fast calculation of the Riemann zeta function for integer values of the argument . Problems of Information Transmission 31, 353–362.

[Karatsuba, 2000]

Karatsuba, E., 2000. On the computation of the Euler constant . J. of Numerical Algorithms 24, 83–97.

[Martinet and Ramis, 1991]

Martinet, J., Ramis, J.-P., 1991. Elementary acceleration and multisummability. Ann. Inst. Henri Poincaré 54 (4), 331–401.

[Mitschi, 1996]

Mitschi, C., 1996. Differential Galois groups of confluent generalized hypergeometric equations: an approach using Stokes multipliers. Pac. J. Math. 176 (2), 365–405.

[Moenck and Borodin, 1972]

Moenck, R., Borodin, A., 1972. Fast modular transforms via division. In: Thirteenth annual IEEE symposium on switching and automata theory. Univ. Maryland, College Park, Md., pp. 90–96.

[Poincaré, 1886]

Poincaré, H., 1886. Sur les intégrales irrégulières des équations linéaires. Acta Math. 8, 295–344.

[Ramis, 1978]

Ramis, J.-P., 1978. Dévissage Gevrey. Astérisque 59/60, 173–204.

[Schönhage and Strassen, 1971]

Schönhage, A., Strassen, V., 1971. Schnelle Multiplikation grosser Zahlen. Computing 7, 281–292.

[Stanley, 1980]

Stanley, R., 1980. Differentially finite power series. European J. Combin. 1, 175–188. MR 81m:05012.

[Thomann, 1995]

Thomann, J., 1995. Procédés formels et numériques de sommation de séries d'équations différentielles. Expositiones Math. 13, 223–246.

[van der Hoeven, 1997]

van der Hoeven, J., 1997. Automatic asymptotics. Ph.D. thesis, École polytechnique, France.

[van der Hoeven, 1999]

van der Hoeven, J., 1999. Fast evaluation of holonomic functions. TCS 210, 199–215.

[van der Hoeven, 2001a]

van der Hoeven, J., 2001a. Complex transseries solutions to algebraic differential equations. Tech. Rep. 2001-34, Univ. d'Orsay.

[van der Hoeven, 2001b]

van der Hoeven, J., 2001b. Fast evaluation of holonomic functions near and in singularities. JSC 31, 717–743.

[van der Hoeven, 2005a]

van der Hoeven, J., 2005a. Around the numeric-symbolic computation of differential Galois groups. Tech. Rep. 2005-4, Université Paris-Sud, Orsay, France, to appear in JSC.

[van der Hoeven, 2005b]

van der Hoeven, J., 2005b. Effective complex analysis. JSC 39, 433–449.

[van der Hoeven et al., 2002]

van der Hoeven, J., et al., 2002. Mmxlib: the standard library for Mathemagix. http://www.mathemagix.org/mml.html.

[van der Put and Singer, 2003]

van der Put, M., Singer, M., 2003. Galois Theory of Linear Differential Equations. Vol. 328 of Grundlehren der mathematischen Wissenschaften. Springer.

[van Hoeij, 1997]

van Hoeij, M., 1997. Formal solutions and factorization of differential operators with power series coefficients. JSC 24, 1–30.