Laboratoire de Mathématiques (425), CNRS, Université Paris-Sud, 91405 Orsay Cedex, France

Abstract. Let $L \in \mathbb{A}(z)[\partial]$ be a linear differential operator, where $\mathbb{A}$ is the field of algebraic numbers. A holonomic function over $\mathbb{A}$ is a solution $f$ to the equation $L f = 0$. We will also assume that $f$ admits initial conditions in $\mathbb{A}$ at a non-singular point $z_0 \in \mathbb{A}$. Given a broken-line path $\delta = z_0 \rightsquigarrow z_1$ between $z_0$ and $z_1$, which avoids the singularities of $L$ and has its vertices in $\mathbb{A}$, we have shown in a previous paper how to compute $n$ digits of the analytic continuation of $f$ along $\delta$ in quasi-linear time. In a second paper, this result was generalized to the case when $z_1$ is allowed to be a regular singularity, in which case we compute the limit of $f$ as we approach the singularity along $\delta$. In the present paper, we treat the remaining case, when the end-point of $\delta$ is an irregular singularity. In fact, we solve the more general problem of computing ``singular transition matrices'' between non-standard points above a singularity and regular points in $\mathbb{A}$ near the singularity. These non-standard points correspond to the choice of ``non-singular directions'' in Écalle's accelero-summation process. We will show that the entries of the singular transition matrices may be approximated up to $n$ decimal digits in quasi-linear time. As a consequence, the entries of the Stokes matrices of $L$ at each singularity may be approximated with the same time complexity.

Let $K$ be a subfield of $\mathbb{C}$. A holonomic function over $K$ is a solution $f$ to a linear differential equation $L f = 0$, where $L = \partial^r + L_{r-1} \partial^{r-1} + \cdots + L_0 \in K(z)[\partial]$ is a monic linear differential operator of order $r$. Many classical special functions, such as $\exp$, $\log$, $\sin$, $\cos$, hypergeometric functions, Bessel functions and the Airy function, are holonomic. Moreover, the class of holonomic functions is stable under many operations, such as addition, multiplication, differentiation, integration and postcomposition with algebraic functions.
In the sequel, and unless stated otherwise, we will assume that $K = \mathbb{A}$ is the field of algebraic numbers. We will say that $f$ has initial conditions in $\mathbb{A}$ if $(f(z_0), \ldots, f^{(r-1)}(z_0)) \in \mathbb{A}^r$ for a certain non-singular point $z_0 \in \mathbb{A}$. In this paper, we will be concerned with the efficient multidigit evaluation of limits of holonomic functions at irregular singularities. For this, it will be convenient to introduce some terminology. We say that $z \in \mathbb{C}$ is effective if there exists an approximation algorithm, which takes a tolerance $\varepsilon = 2^{-n}$ on input and which returns a dyadic approximation $\tilde z \in (\mathbb{Z} + \mathbb{Z}\,\mathrm{i})\, 2^{\mathbb{Z}}$ with $|\tilde z - z| \leq \varepsilon$. Inside a computer, an effective complex number is represented as an object with a method which corresponds to its approximation algorithm. We denote the set of effective complex numbers by $\mathbb{C}_{\mathrm{eff}}$. The cost of $z \in \mathbb{C}_{\mathrm{eff}}$ is the time complexity of its approximation algorithm, expressed in terms of $n = \log 1/\varepsilon$. If an approximation algorithm has time complexity $T(n)$, then we call it a $T(n)$-approximation algorithm. An effective number is said to be fast if it admits an approximation algorithm whose time complexity is quasi-linear in $n$. We denote the set of such numbers by $\mathbb{C}_{\mathrm{fast}}$. A partial function $\mathbb{C}_{\mathrm{eff}}^k \dashrightarrow \mathbb{C}_{\mathrm{eff}}$ is said to be fast if it maps fast numbers to fast numbers. For instance, multiplication is fast, since two $n$-bit numbers can be multiplied in time $O(n \log n \log\log n)$. Implicitly defined functions in terms of fast functions, like division, are also fast, as a result of Newton's method. Whenever the coefficients of $L$ admit singularities, solutions $f$ to $L f = 0$ are typically multivalued functions on a Riemann surface. From an effective point of view, points on such a Riemann surface may be addressed via broken-line paths $\delta = z_0 \rightsquigarrow z_1 \rightsquigarrow \cdots \rightsquigarrow z_l$ starting at the point $z_0$ where we specified the initial conditions for $f$. Each step $z_i \rightsquigarrow z_{i+1}$ should be sufficiently short, so that the disk with center $z_i$ and radius $|z_{i+1} - z_i|$ contains no singularities. Given such a path, we will denote by $f(\delta)$ the evaluation of $f$ at the endpoint $z_l$ of $\delta$, as obtained via analytic continuation. It was first noticed by Brent that the constant $\exp 1$ admits an efficient fast approximation algorithm based on binary splitting.
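Brent's observation can be made concrete on $\exp 1 = \sum_k 1/k!$: binary splitting evaluates the partial sum as an exact fraction $P/Q$ using a balanced product tree of small integers, so that the bulk of the work consists of multiplications of large integers, which are fast. The following is a minimal sketch of the standard recursion (our own illustration, not code from the paper; the function names are ours):

```python
import math

def pq(a, b):
    # Binary splitting: return (P, Q) with sum_{k=a+1}^{b} a!/k! = P/Q,
    # where Q = (a+1)(a+2)...b.  The balanced recursion keeps operands small.
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = pq(a, m)
    p2, q2 = pq(m, b)
    return p1 * q2 + p2, q1 * q2

def e_digits(n):
    # floor(e * 10^n): choose N with N! > 10^(n+2), so that the series tail
    # sum_{k>N} 1/k! is negligible at this precision.
    N = 2
    while math.lgamma(N + 1) < (n + 2) * math.log(10):
        N += 1
    p, q = pq(0, N)                    # p/q = sum_{k=1}^{N} 1/k!
    return (10 ** n * (q + p)) // q    # the k = 0 term contributes the extra q

assert e_digits(30) == 2718281828459045235360287471352
```

With fast integer multiplication, the cost of `e_digits(n)` is quasi-linear in $n$ up to logarithmic factors, in contrast with the quadratic cost of summing the series term by term in floating-point arithmetic.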
This result was obtained by analogy with Schönhage's fast algorithm for radix conversion. The paper also mentions efficient algorithms for the computation of more general exponentials, although this direction was not investigated in more detail, probably because even more efficient algorithms based on the arithmetic-geometric mean were discovered shortly afterwards. The binary splitting algorithm was later generalized to arbitrary holonomic functions over $\mathbb{Q}$. It was shown that, given a holonomic function $f$ over $\mathbb{Q}$ with initial conditions in $\mathbb{Q}$, and a broken-line path $\delta = z_0 \rightsquigarrow \cdots \rightsquigarrow z_l$ as above with rational vertices, the number $f(\delta)$ admits a quasi-linear approximation algorithm. In the case when $z_l$ is a more general effective number with a fast approximation algorithm, it was also shown that $f(\delta)$ admits a fast approximation algorithm. In particular, the restriction of a holonomic function to an open domain is fast. By what precedes, this result is extremely interesting for the efficient multidigit evaluation of many special functions. Special cases and a few extensions were rediscovered independently by several authors.

Remark. An early hint at the existence of fast algorithms for the evaluation of holonomic functions occurred earlier in the literature. It is plausible that the authors had something like the binary splitting algorithm in mind (the announced complexity is the right one up to a logarithmic factor), but no details are provided.

Our first paper on the subject contained three improvements with respect to this earlier work. First, we noticed the possibility to work over the algebraic numbers $\mathbb{A}$ instead of $\mathbb{Q}$, which allows for the fast evaluation of a larger class of constants. Secondly, we improved the polylogarithmic overhead in the complexity bound for evaluation at arbitrary points. Finally, the evaluation of $f(\delta)$ depends on a certain number of bounds, which were previously assumed to be given empirically. In our first paper, it was shown that all necessary bounds can be computed effectively, as functions of the operator $L$ and the path $\delta$.
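The notion of an effective number ``represented as an object with an approximation method'' can be illustrated by a minimal Python sketch (our own illustration, not from the paper; the class name `EffectiveNumber` and the integer-mantissa representation are ours):

```python
from fractions import Fraction
from math import isqrt

class EffectiveNumber:
    """An effective number: an object whose approx method returns a dyadic
    approximation within 2^-n of the number it represents."""
    def __init__(self, mantissa):
        # mantissa(n) must return an integer m with |m / 2^n - x| <= 2^-n
        self.mantissa = mantissa
    def approx(self, n):
        return Fraction(self.mantissa(n), 2 ** n)

# sqrt(2) as an effective number: isqrt(2 * 4^n) = floor(sqrt(2) * 2^n)
sqrt2 = EffectiveNumber(lambda n: isqrt(2 * 4 ** n))

a = sqrt2.approx(100)
assert abs(a * a - 2) < Fraction(1, 2 ** 98)   # |a - sqrt(2)| <= 2^-100
```

The cost of `approx(n)` as a function of $n$ is exactly the ``cost'' of the effective number in the sense defined above; here it is dominated by the integer square root, which is fast.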
Stated otherwise, we showed that there exists an algorithm which takes $L$, $\delta$ and the initial conditions for $f$ at $z_0$ on input, and which computes $f(\delta)$ (as an object with a fast approximation algorithm). In a second paper, we continued our studies by showing how to efficiently evaluate the limit of $f$ along a broken-line path $\delta$ which ends in a regular singular point $z_l$. This extension allows for the efficient evaluation of multiple zeta values, Bessel functions (whose initial conditions are specified in a regular singular point) and many other interesting transcendental constants. Some special cases of this more general result were obtained before.

A problem related to the evaluation of $f$ at the end-point of a broken-line path $\delta$ is the computation of ``transition matrices'' along $\delta$. Given a path $\delta = z_0 \rightsquigarrow z_1$ from $z_0$ to $z_1$, the ``initial conditions'' $F(z_1) = (f(z_1), \ldots, f^{(r-1)}(z_1))$ of $f$ at $z_1$ depend linearly on the ``initial conditions'' $F(z_0) = (f(z_0), \ldots, f^{(r-1)}(z_0))$ at $z_0$. Hence, considering $F(z_0)$ and $F(z_1)$ as column vectors, there exists a unique scalar matrix $\Delta_{z_0 \rightsquigarrow z_1}$ with
\[ F(z_1) = \Delta_{z_0 \rightsquigarrow z_1}\, F(z_0), \]
which is called the transition matrix along $\delta$ for $L$. The relation $\Delta_{z_0 \rightsquigarrow z_2} = \Delta_{z_1 \rightsquigarrow z_2}\, \Delta_{z_0 \rightsquigarrow z_1}$ makes transition matrices well-suited to the process of analytic continuation. Therefore, most of our earlier algorithms rely on the computation of transition matrices. In the second paper, this concept was further generalized to the case when $\delta$ is allowed to pass through regular singularities.

In this paper, we will be concerned with the computation of the limits of holonomic functions at irregular singularities and, more generally, with the computation of generalized transition matrices along paths which are allowed to pass through irregular singularities. The algorithms are based on an effective counterpart of the accelero-summation process, as introduced by Écalle. Since this process is not completely straightforward, let us first motivate its use for our application. Consider a holonomic function $f$ with an irregular singularity at the origin.
Assume that $f$ admits a (usually divergent) asymptotic expansion $\tilde f = f_0 + f_1 z + \cdots \in \mathbb{C}[[z]]$ in a sector $S$ near the origin. Assume also that we have a bound $B$ for $f$ on $S$, and that the positive real axis lies in $S$. We are interested in computing
\[ I = \int_0^{\infty} f(t)\, e^{-t}\, \mathrm d t. \]
Notice that $I(z) = \int_z^{\infty} f(t)\, e^{-t}\, \mathrm d t$ is a holonomic function with $I = \lim_{z \to 0} I(z)$, so the computation of $I$ is a particular instance of the problem of computing the limit of a holonomic function at an irregular singularity. In order to find $\tilde I \in (\mathbb{Z} + \mathbb{Z}\,\mathrm{i})\,2^{\mathbb{Z}}$ with $|\tilde I - I| \leq \varepsilon$, for a given $\varepsilon = 2^{-n}$, it clearly suffices to compute $I(z_\varepsilon)$ with precision $\varepsilon/2$ at a point $z_\varepsilon$ with $|z_\varepsilon| \leq \varepsilon/(2B)$. This can be done using the analytic continuation algorithm from our first paper. However, since the equation $L f = 0$ may have other solutions with exponential growth rates at $0$, the transition matrix between $z_0$ and $z_\varepsilon$ may contain entries whose sizes grow exponentially in a power of $1/\varepsilon$. The computation of $n = O(\log 1/\varepsilon)$ digits of $I$ by this direct method may therefore require exponential time.

The situation gets a bit better if we want to compute a Laplace-type integral $\int_0^{\infty} f(t)\, e^{-t/z}\, \mathrm d t$ for small $z$ instead of $I$. In that case, using a similar method as above, we may choose $z_\varepsilon$ with $1/|z_\varepsilon| = O(\log 1/\varepsilon)$. Consequently, the computation of $n = O(\log 1/\varepsilon)$ digits requires a time $O(n^{\kappa} \log n)$, where $\kappa > 1$. Although this already yields a polynomial time algorithm, we are really interested in fast approximation algorithms. Roughly speaking, the main result of this paper is that the computation of an arbitrary limit of a holonomic function at an irregular singularity may be reduced to the computation of a finite number of other, more special limits. These special limits, which are similar to the integrals above but with $\kappa = 1$, will be shown to admit fast approximation algorithms. More generally, we will generalize the concept of transition matrices so as to allow for broken-line paths through irregular singularities. In particular, Stokes matrices may be seen as such ``singular transition matrices''. We will show both that singular transition matrices may be computed as functions of $L$ and a singular broken-line path $\delta$, and that their entries admit fast approximation algorithms.
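The phenomenon can be made concrete on the classical Euler series $\tilde f(z) = \sum_k (-1)^k\, k!\, z^{k+1}$ (our choice of example, not the paper's), which diverges for every $z \neq 0$; its Borel sum is the convergent Laplace integral $f(z) = \int_0^{\infty} e^{-t/z}/(1+t)\,\mathrm d t$. The sketch below compares an optimally truncated partial sum with the Borel sum, using naive quadrature with no error control:

```python
import math

def borel_sum_euler(z, T=60.0, steps=50000):
    # f(z) = ∫_0^∞ e^(-t/z)/(1+t) dt, composite Simpson on [0, T];
    # the neglected tail is below e^(-T/z) for 0 < z < 1.
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * math.exp(-t / z) / (1 + t)
    return total * h / 3

def truncated_series(z, N):
    # partial sum  sum_{k=0}^{N-1} (-1)^k k! z^(k+1)
    s, term = 0.0, z
    for k in range(N):
        s += term
        term *= -(k + 1) * z
    return s

z = 0.1
exact = borel_sum_euler(z)               # ≈ e^10 E_1(10) ≈ 0.091563
best = truncated_series(z, 10)           # optimal truncation near N ≈ 1/z
assert abs(best - exact) < 1e-4
```

Truncating the divergent series at its smallest term only yields accuracy of order $e^{-1/z}$, whereas the integral representation can be evaluated to any precision; this is exactly the gap that effective accelero-summation closes for general holonomic functions.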
This result admits several interesting applications besides the computation of limits of holonomic functions at singularities. For instance, we may consider solutions $f$ to $L f = 0$ with a prescribed asymptotic behaviour in one or several singularities and recover the function from these ``singular initial conditions'' and one or more singular transition matrices. It has also been shown that the possibility to compute the entries of Stokes matrices can be used for the numeric computation of the differential Galois group of $L$. In particular, we obtain an efficient algorithm for factoring $L$. Our results can be compared to the only previous work on effective resummation that we are aware of. First of all, the current paper has the advantage that all error bounds necessary for guaranteeing a given precision are computed automatically. Secondly, the almost linear time complexity is far better than the complexities achieved by other numerical algorithms, like naive Taylor series summation (of quadratic complexity, at best) or the Runge-Kutta method (of exponential complexity as a function of the number of digits).

Let us briefly outline the structure of this paper. We begin with a survey of the accelero-summation process. The idea is to give a meaning to the evaluation of a divergent formal solution $\tilde f$ via a succession of transformations. We first make the formal solution convergent at the origin by applying a formal Borel transform. We next apply a finite number of integral transforms called ``accelerations'', followed by a Laplace transform. At the end, we obtain an analytic solution to $L f = 0$ in a sector near the origin, which admits the divergent formal solution as its asymptotic expansion. The material in the following section is more or less classical. We first recall the definition of the Newton polygon of $L$ at a singularity, as well as the relationship between its slopes and the shape of formal solutions to $L f = 0$. In particular, the steepest slope gives us information about the maximal growth rate of solutions.
We next study the Newton polygons of other operators related to $L$, like the operators which annihilate the Borel transforms of solutions to $L f = 0$. After that, we recall several stability properties of holonomic functions and constants, as well as their effective counterparts. In particular, we will show that the integrands involved in the accelero-summation procedure are holonomic and how to compute vanishing operators for them. Using the results on Newton polygons, these integrands will be seen to have the required growth rates at infinity. In the subsequent sections, we show how to compute uniform bounds for the transition matrices in suitable sectors near infinity; these bounds will be used for the efficient evaluation of integrals with exponential decrease. Finally, the different techniques are assembled into an effective and efficient accelero-summation procedure. None of the algorithms in this paper have been implemented yet. Nevertheless, at least some of the algorithms should be implemented inside the standard library of the upcoming system, and any help would be appreciated.

The following notations will frequently be used in this paper: $K^{>} = \{x \in K : x > 0\}$ for an ordered field $K$; the closed disk with center $c$ and radius $r$; the sectors $\{z : |\arg z - \theta| \leq \varphi,\ |z| \leq R\}$ at the origin and $\{z : |\arg z - \theta| \leq \varphi,\ |z| \geq R\}$ at infinity; the Riemann surface of $\log$; the formal Borel transform $\tilde{\mathcal B}$; the Borel and Laplace transforms $\hat{\mathcal B}$ and $\hat{\mathcal L}$ for minors, together with their counterparts $\check{\mathcal B}$ and $\check{\mathcal L}$ for majors; the operator transforms $\mathcal B L$, $\rho_\alpha L$ (with $z \mapsto z^\alpha$) and the conjugate of $L$ by an exponential factor, which annihilate the corresponding transforms of the solutions of $L$; and the transition matrices $\Delta_\delta$ along paths $\delta$. The Borel- and Laplace-type operators and the transformations $\rho_\alpha$ are defined in the sections below; transition matrices were defined above.

In this section we survey the process of accelero-summation, give some explicit bounds for the acceleration kernels, and recall the interpretation of the accelero-summation process in terms of ``majors''. We have aimed to keep our survey as brief as possible.
It is more elegant to develop this theory using resurgent functions and resurgence monomials; for a more complete treatment, we refer to Écalle's work. Let $\tilde{\mathcal F} = \mathbb{C}[[z^{*}]][\log z]$, where $\mathbb{C}[[z^{*}]]$ denotes the differential $\mathbb{C}$-algebra of infinitesimal Puiseux series in $z$, and consider a formal solution $\tilde f \in \tilde{\mathcal F}$ to a linear differential equation $L f = 0$ over $\mathbb{C}(z)$. Under suitable hypotheses, the process of accelero-summation enables us to associate an analytic meaning $f$ to $\tilde f$ in a sector near the origin of the Riemann surface $\mathcal R$ of $\log$, even in the case when $\tilde f$ is divergent. Schematically speaking, we obtain $f$ through a succession of transformations:
\[ \tilde f \;\xrightarrow{\ \tilde{\mathcal B}_{k_1}\ }\; \tilde f_1 \;\xrightarrow{\ \hat{\mathcal A}_{k_1 \to k_2}\ }\; \tilde f_2 \;\longrightarrow\; \cdots \;\longrightarrow\; \tilde f_{p-1} \;\xrightarrow{\ \hat{\mathcal A}_{k_{p-1} \to k_p}\ }\; \tilde f_p \;\xrightarrow{\ \hat{\mathcal L}_{k_p}\ }\; f. \]
Each $\tilde f_i$ is a ``resurgent function'' which realizes $\tilde f$ in the ``convolution model'' with respect to the $i$-th ``critical time'' $z_i = z^{k_i}$ (with $k_i \in \mathbb{Q}^{>}$ and $k_1 < \cdots < k_p$). In our case, $\tilde f_i$ is an analytic function which admits only a finite number of singularities. In general, the singularities of a resurgent function are usually located on a finitely generated grid. Let us describe the transformations $\tilde{\mathcal B}_{k_1}$, $\hat{\mathcal A}_{k_i \to k_{i+1}}$ and $\hat{\mathcal L}_{k_p}$ in more detail.

The formal Borel transform. We start by applying the formal Borel transform with respect to the first critical time to the series $\tilde f$. This transformation sends each monomial $z^{\alpha} \log^{l} z$ to
\[ (\tilde{\mathcal B}\, z^{\alpha} \log^{l} z)(\zeta) = \partial_\alpha^{l}\, \bigl( \gamma(\alpha)\, \zeta^{\alpha - 1} \bigr), \qquad \gamma(\alpha) = 1/\Gamma(\alpha), \]
and extends by strong linearity, i.e. coefficient-wise to infinite sums. The result is a formal series $\tilde f_1$ in $\zeta_1$ which converges near the origin of the Riemann surface $\mathcal R$ of $\log$. The formal Borel transform is a morphism of differential algebras which sends multiplication to the convolution product, $\tilde{\mathcal B}(f g) = (\tilde{\mathcal B} f) \ast (\tilde{\mathcal B} g)$, and differentiation with respect to $1/z_1$ to multiplication by $-\zeta_1$. Intuitively speaking, the Borel transform is inverse to the Laplace transform defined below.
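The multiplication-to-convolution property can be checked exactly on monomials: for integer $a \geq 1$ we have $\tilde{\mathcal B}(z^{a}) = \zeta^{a-1}/(a-1)!$, and the convolution $(\hat f \ast \hat g)(\zeta) = \int_0^{\zeta} \hat f(\xi)\, \hat g(\zeta - \xi)\, \mathrm d\xi$ of two such images is again of this form. A small exact-arithmetic sketch (our own illustration):

```python
from fractions import Fraction
from math import comb, factorial

def borel_monomial(a):
    # B(z^a) = zeta^(a-1) / (a-1)!  for integer a >= 1; encoded as (coeff, power)
    return Fraction(1, factorial(a - 1)), a - 1

def convolve(f, g):
    # (c1 x^m1) * (c2 x^m2) under (f*g)(x) = ∫_0^x f(t) g(x - t) dt,
    # computed by expanding (x - t)^m2 binomially and integrating term-wise.
    (c1, m1), (c2, m2) = f, g
    coeff = Fraction(0)
    for j in range(m2 + 1):
        # ∫_0^x t^(m1+j) x^(m2-j) (-1)^j C(m2, j) dt
        coeff += Fraction(comb(m2, j) * (-1) ** j, m1 + j + 1)
    return (c1 * c2 * coeff, m1 + m2 + 1)

# check B(z^a) * B(z^b) == B(z^(a+b)) for a few exponents
for a in range(1, 6):
    for b in range(1, 6):
        assert convolve(borel_monomial(a), borel_monomial(b)) == borel_monomial(a + b)
```

The identity behind the check is the Beta integral $\int_0^{\zeta} \xi^{m_1} (\zeta - \xi)^{m_2}\, \mathrm d\xi = \frac{m_1!\, m_2!}{(m_1 + m_2 + 1)!}\, \zeta^{m_1 + m_2 + 1}$, which is exactly what makes the Borel transform turn products into convolutions.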
Accelerations. Given $i < p$, the function $\tilde f_i$ is defined near the origin of $\mathcal R$, can be analytically continued along the axis $e^{\mathrm i \theta_i}\, \mathbb{R}^{>}$, and admits a growth of the form $\exp O\bigl(|\zeta_i|^{k_{i+1}/(k_{i+1} - k_i)}\bigr)$ at infinity. The next function $\tilde f_{i+1}$ is obtained from $\tilde f_i$ by an acceleration of the form
\[ \tilde f_{i+1}(\zeta_{i+1}) = (\hat{\mathcal A}_{k_i \to k_{i+1}}\, \tilde f_i)(\zeta_{i+1}) = \int_0^{e^{\mathrm i \theta_i} \infty} \tilde f_i(\zeta_i)\, K_{k_{i+1}, k_i}(\zeta_{i+1}, \zeta_i)\, \mathrm d \zeta_i, \]
where the acceleration kernel $K_{k_{i+1}, k_i}$ is given by a contour integral of $e^{\zeta_{i+1} z_{i+1} - \zeta_i z_i}$ (with $z_i = z_{i+1}^{\alpha}$, $\alpha = k_i/k_{i+1} < 1$) and may be rewritten, up to a power of $\zeta_{i+1}$, in terms of the universal kernel
\[ C_{\alpha}(\zeta) = \frac{1}{2 \pi \mathrm i} \int_{\mathcal C} e^{z - \zeta z^{\alpha}}\, \mathrm d z \]
evaluated at $\zeta_i / \zeta_{i+1}^{\alpha}$. For large $\zeta$, we will show below that
\[ C_{\alpha}(\zeta) \leq B \exp\bigl( - C\, \zeta^{1/(1-\alpha)} \bigr) \]
for some constants $B, C > 0$. It follows that the acceleration $\tilde f_{i+1}$ of $\tilde f_i$ is well-defined for small $\zeta_{i+1}$ on the axis $e^{\mathrm i \theta_{i+1}}\, \mathbb{R}^{>}$, where $\theta_{i+1}$ is the direction induced by $\theta_i$ under the change of critical time. The set of directions $\theta_i$ such that $\tilde f_i$ admits a singularity on $e^{\mathrm i \theta_i}\, \mathbb{R}^{>}$ is called the set of singular directions at the $i$-th critical time. Accelerations are morphisms of differential $\mathbb{C}$-algebras which preserve the convolution product. Intuitively speaking, one has $\hat{\mathcal A}_{k_i \to k_{i+1}} = \hat{\mathcal B}_{k_{i+1}} \circ \hat{\mathcal L}_{k_i}$, where the Laplace transform $\hat{\mathcal L}$ is defined below.

The Laplace transform. The last function $\tilde f_p$ is defined near the origin of $\mathcal R$, can be analytically continued along the axis $e^{\mathrm i \theta_p}\, \mathbb{R}^{>}$ and admits at most exponential growth at infinity. The function $f$ is now obtained using the analytic Laplace transform
\[ f(z) = (\hat{\mathcal L}_{k_p}\, \tilde f_p)(z) = \int_0^{e^{\mathrm i \theta_p} \infty} \tilde f_p(\zeta)\, e^{- \zeta / z_p}\, \mathrm d \zeta, \qquad z_p = z^{k_p}. \]
For any sufficiently small $z$ with $|\arg z_p - \theta_p| < \pi/2$, the value $f(z)$ is well defined. The set of Stokes directions is defined in a similar way as in the case of accelerations. The Laplace transform is a morphism of differential $\mathbb{C}$-algebras which is inverse to the Borel transform and sends the convolution product to multiplication.

Given a tuple $k = (k_1, \ldots, k_p)$ of critical times $k_1 < \cdots < k_p$ in $\mathbb{Q}^{>}$ and a tuple $\theta = (\theta_1, \ldots, \theta_p)$ of directions, where each $\theta_i$ is non-singular and consecutive directions are compatible under the changes of critical time, we say that a formal power series $\tilde f \in \tilde{\mathcal F}$ is accelero-summable in the multi-direction $\theta$ if the above scheme yields an analytic function $f = \operatorname{sum}_{k, \theta} \tilde f$. For suitable opening angles, this function is defined in a sufficiently small sector near the origin. We denote the set of accelero-summable power series of this kind by $\mathcal D_{k, \theta}$.
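For a convergent series, the Borel–Laplace round trip simply returns the original function: with a single critical time $z_1 = z$ ($p = 1$), the series $\tilde f = \sum_{n \geq 1} z^n$ has Borel transform $\hat f(\zeta) = e^{\zeta}$, and $\int_0^{\infty} e^{\zeta}\, e^{-\zeta/z}\, \mathrm d\zeta = z/(1-z)$ for $0 < z < 1$. A numerical spot-check (our own illustration, naive quadrature with a fixed truncation point):

```python
import math

def laplace(fhat, z, T=80.0, steps=160000):
    # f(z) = ∫_0^T fhat(ζ) e^(-ζ/z) dζ by composite Simpson; valid when the
    # integrand has decayed before T, i.e. fhat grows slower than e^(ζ/z).
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        zeta = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * fhat(zeta) * math.exp(-zeta / z)
    return total * h / 3

z = 0.3
val = laplace(math.exp, z)          # Borel transform of sum_{n>=1} z^n is e^ζ
assert abs(val - z / (1 - z)) < 1e-6
```

The interesting case for accelero-summation is precisely when $\hat f$ grows too fast at infinity for this direct Laplace integral to converge, which is where the intermediate accelerations $\hat{\mathcal A} = \hat{\mathcal B} \circ \hat{\mathcal L}$ come in.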
The set $\mathcal D_{k, \theta}$ forms a differential subring of $\tilde{\mathcal F}$ and the map $\tilde f \mapsto f = \operatorname{sum}_{k,\theta} \tilde f$ for $\tilde f \in \mathcal D_{k, \theta}$ is injective. If $k'$ and $\theta'$ are obtained from $k$ and $\theta$ by inserting a new critical time and a corresponding non-singular direction, then $\mathcal D_{k, \theta} \subseteq \mathcal D_{k', \theta'}$. In particular, $\mathcal D_{k, \theta}$ contains $\mathbb{C}\{\{z^{*}\}\}[\log z]$, where $\mathbb{C}\{\{z^{*}\}\}$ denotes the ring of convergent infinitesimal Puiseux series. Assuming that each set of singular directions is finite modulo $2\pi$, we also use the notations $\mathcal D_k$ for the intersection over admissible multi-directions and $\mathcal D$ for the union over all tuples of critical times.

Let $\mathcal E$ be the group of exponential monomials $e^{\varphi}$, where $\varphi$ ranges over Puiseux polynomials in $z^{-1}$, and consider finite sums $\sum_{e \in \mathcal E} \tilde f_e\, e$ whose coefficients $\tilde f_e$ are of the form $\tilde g\, z^{\alpha}$ with $\tilde g \in \tilde{\mathcal F}$ and $\alpha \in \mathbb{C}$. The notion of accelero-summation extends to such sums. Indeed, given $\tilde f \in \mathcal D_{k, \theta}$, $\alpha \in \mathbb{C}$ and $\varphi$ as above, we may simply take
\[ \operatorname{sum}_{k, \theta} \bigl( \tilde f\, z^{\alpha} e^{\varphi} \bigr)(z) = \bigl( \operatorname{sum}_{k, \theta} \tilde f \bigr)(z)\; z^{\alpha}\, e^{\varphi(z)}. \]
It can be checked that this definition is coherent under the natural rewritings of such monomials. By linearity, we thus obtain a natural differential algebra of accelero-summable transseries with critical times $k$ in the multi-direction $\theta$, together with the obvious analogues of $\mathcal D_k$ and $\mathcal D$.

In general, the acceleration and Laplace integrands are both singular at zero and at infinity. Much of the remainder of this paper is directly or indirectly concerned with efficient integration near infinity. This leaves us with the integration at zero. A classical trick is to replace the integrand by a so-called major. This allows us to replace the integral from zero to a point $u$ close to zero by a contour integral around zero from $e^{-2\pi\mathrm i} u$ to $u$. We will rapidly review this technique and refer to the literature for details. Consider an analytic germ $\hat f$ near the origin of the Riemann surface $\mathcal R$ of $\log$. A major for $\hat f$ is an analytic germ $\check f$ with
\[ \hat f(\zeta) = \check f(\zeta) - \check f\bigl( \zeta\, e^{-2\pi\mathrm i} \bigr). \]
The minor $\hat f$ only depends on the class of $\check f$ modulo the set of germs which are regular at the origin; any element of this class is called a major of $\hat f$.
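As a sanity check of the definition, here are two standard examples of majors (classical computations; the normalizations are our choice):

```latex
% A major of the constant minor \hat f(\zeta) = 1:
\check f(\zeta) = \frac{\log \zeta}{2\pi\mathrm{i}}, \qquad
\check f(\zeta) - \check f(\zeta e^{-2\pi\mathrm{i}})
  = \frac{\log\zeta - (\log\zeta - 2\pi\mathrm{i})}{2\pi\mathrm{i}} = 1.

% A major of \hat f(\zeta) = \zeta^{\alpha-1} for \alpha \notin \mathbb{Z}:
\check f(\zeta) = \frac{\zeta^{\alpha-1}}{1 - e^{-2\pi\mathrm{i}\alpha}}, \qquad
\check f(\zeta) - \check f(\zeta e^{-2\pi\mathrm{i}})
  = \zeta^{\alpha-1}\,
    \frac{1 - e^{-2\pi\mathrm{i}(\alpha-1)}}{1 - e^{-2\pi\mathrm{i}\alpha}}
  = \zeta^{\alpha-1}.
```

Note that the first major is genuinely multivalued while its minor is regular, and that the second degenerates as $\alpha$ approaches an integer, which is why the integer case requires the extra logarithmic terms appearing below.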
Given a regular germ >, \\>> and \>, the minor <\equation*> (\)=\(\)*\>*log \ admits the major <\equation*> >=(\)*\>*P(log \)>|\\\>>|2*\*\*\>>*\(\)*\>*P,k>(log \)>|\\\>>>>> for certain polynomials (log \)=*\*(k+1)>*log \+\> and ,k>(log \)=log \+\>. More generally, if > is locally integrable in a sector containing apoint near >>, then <\equation> >(\)=1|2*\*\>*(\)|\-\>*\ \ is a major for >. The class of >> does not depend on the choice of . Given majors >> for the > from section , we may now replace () and () by <\eqnarray*> (\)>||>>>(\)*,k>(\,\)**\ \>>|(z)=(|\>>> f)(z)>||>>>(\)*\\/z>*\ \,>>>> where >> stands for the contour (see figure below) which consists of >>> from -2*\)*\>*\> to -2*\)*\>*\> (for some small \0>), followed by >> from -2*\)*\>*\> around to *\>*\>, and \ >> from *\>*\> to*\>*\>. Using the formula () in combination with() leads to the direct expression <\equation> >(\)=(|\>,k>> >)(\)=>>>(\)*>,k>(\,\)*\ \ of >> in terms of >>, where <\equation*> >(\,\)=>(\,\)=1|2*\*\>*,k>(\,\)|\-\>*\ \. The integrals () and () further decompose into <\eqnarray*> >(\)>||>>>(\)*>,k>(\,\)*\ \+>>|||>>(\)*>,k>(\,\)*\ \>>|(z)>||>>>(\)*\\/z>*\ \+>>(\)*\\/z>*\ \.>>>> More generally, differentiating \> times >, we obtain the following formulas, on which we will base our effective accelero-summation algorithms: <\eqnarray*> >(\)>||>>>(\)*>,k>(\,\)*\ \+>>|||>>(\)*>,k>(\,\)*\ \>>|(z)>||>>>(\)* \\/z>|\ z>*\ \+>>(\)* \\/z>|\ z>*\ \.>>>> In section below, we will show that for small enough, the kernel >(\,\)> and its derivatives in> have the same order of decrease at infinity as (\,\)>. <\big-figure|||>|gr-mode||gr-line-arrows|none|gr-geometry||gr-frame|>||||||>||>||>||>|||>|||>|||>|||>|||>|||>|||>||>>|*\>*\>|>>|>>|>>|>>|>>|>>>|>>|>>|>>|>>|>>||>|>>> Illustrations of the contours for the acceleration and Laplace integrals. At the left, the contour for the direct integrals () and () using minors. In the middle, the contour in the case of majors () and (). 
At the right hand side, we use majors for integration at and minors for the integration at infinity, as in () and (). <\lemma> Given \\> and 0> with 2*\>, we have <\equation*> >x>*\x>*\ x\2*X>*\X>. <\proof> In the case when \0>, we have <\equation*> >x>*\x>*\ x\X>*>\x>*\ x=X>*\X>\2*X>*\X>. If \0>, then consider the function (x)=x-\*log x> and its inverse (t)>. Given 2*\>, we obtain <\eqnarray*> >x>*\x>*\ x>||(X)>>\(t)*\*\ t=(X)>>t>*\ t|1-|\(t)>>*\ t>>||>|(X)>>\t>*\ t=2*X>*\X>.>>>> <\lemma> Given \0>, \0>, \\-1> and 0> with +2|\*\>>>, we have <\eqnarray*> >x>*\\*x>>*\ x>|>|*\>*X+1-\>*\\*X>>;>>|>\\*x>>*\ x>|>|\*X>>.>>>> <\proof> For +2|\*\>>\2*+1-\|\*\>>>, the above lemma implies <\equation*> >x>*\\*x>>*\ x=*\+1|\>>>*X>>>x+1-\|\>>*\x>*\ x\*\>*X+1-\>*\\*X>>. The second relation easily follows from the first one by setting =0>. <\lemma> Let ,\,\,X\0>. Then <\equation*> X>*\*X>\|\*\>>*\+\)*X>. <\proof> This follows from the fact that the function >*\\*x>> admits its minimum at/\>>. <\lemma> Given \1> and 2*\>, we have <\equation*> >x>*\x>*\ x\4*\>*\(\)*\X> <\proof> By lemma , we have <\equation*> >x>*\x>*\ x\2*\+1>*\>+2*(2*\)>*\>, since >*\x>> admits its maximum in >. Furthermore, <\equation*> 2*\+1>*\\>+2*(2*\)>*\2*\>\2*(\>*\+1>+2>*\>)*\X>\4*\>*\(\)*\X>. The second inequality can be checked for small > by drawing the graph and it holds for large > because of Stirling's formula. <\lemma> Given \(0,1)>, \1>, \0> and [2*(\+1-\)/\]>>, we have <\equation*> >x>*\\*x>>*\ x\4*\+4|\>>*\(|\>>)*\\*X>>. <\proof> Application of the previous lemma, after a change of variables. <\lemma> Let \(0,1)> and \0>. Denote <\eqnarray*> ||*\)>>>>|>|||2>>*\>>>> and assume 14>. Then <\equation*> \|>(\)\|\>*(cos \)>>*\|\>*s>. <\proof> Let (z)=z-\*z>>. We will evaluate the integral() using the saddle point method. 
In the neighbourhood of the saddlepoint , we have <\eqnarray*> (s+\*\)>||-1|\>*s-|2*s>*\-|2>*>\(s+\*t)*(\-t)*\ t,>>|(z)>||\*\*(1-\)*(2-\)*z-3>.>>>> For on *]>, we also have <\eqnarray*> (z)\|>|>|1/2>>*\*\*(1-\)*(2-\)*s-3>>>|||1/2>>*)*(2-\)|s>,>>>> For \|\>, it follows that <\eqnarray*> |2>*>\(s+\*t)*(t-\)*\ t>|>|1/2>>*)*(2-\)|6*s>*\>>||>||3*s>*1/2>>*\>>||>||4*s>*\,>>>> where the last bound follows from our assumption 14>. We infer that <\equation*> \\(s+\*\)-\(s)\\|4*s>*\, whence <\equation> >>\(s+\*x)-\(s)>*\*x\\>>\|4*s>*x>*\ x=*s|1-\>>. Now let =s*(cos \)1/\>\>. We have <\equation> \>>+>> \(s+\*x)-\(s)>*\*x\2*\, since \(s+\*x)> admits a unique maximum at . Furthermore, <\equation*> \ (\*(s+\*x)>)\\*\|x\|>*cos \, for all \>. Lemma therefore implies <\equation> \>\>+>>\(s+\*x)-\(s)>*\*x\2*\*\*(s>-(cos \)**\>)>\2*\, since >\s>/cos \\2*s-1>/cos \=2/(\*\*cos \)>. Putting the relations (),() and() together, we obtain <\equation*> \|>(\)\|*\|\>*s>\*s|1-\>>+)>>\>*(cos \)>>. This completes the proof of our lemma. <\lemma> Let =k/k> and assume that =\*arg \>, >, \|u\|\\|\\|/2> and <\equation> \|\/u>\|\max>|\>,>>*\\>. Then <\equation> \|>,k>(\,\)\|\*m!|\*(cos *\|2>)>*>*\>*\|\>*(\*\/u>))>>. <\proof> We first observe that <\eqnarray*> >,k>(\,\)>||m!|2*\*\>*>(\/\>)|(\-\)>* \|\>>>|||m!|2*\*\*\>*/u>>>>(\)|((\/\)>-\)>* \|\>.>>>> For \\/u>>, we also have /\\|>\\|u\|\\|\\|/2>, so that <\equation> \|>,k>(\,\)\|\*\*\*\>*/u>>>>(\)* \|\>. Setting =(\*\)/2>, the lemmas and now imply <\eqnarray*> /u>>>>(\)* \|\>>|>|>*(cos \)>>*/u>>>(\*\)>>*\|\>*(\*\))>>* \|\>>>||>||(cos \)>*>>*\|\>*(\*\/u>))>>,>>>> because of the assumption (). Combining this bound with (), we obtain(). Let [z>]> be the set of polynomials of the form >*z>+\+P>*z>> with >,\,P>\\>> and \\\\\\>. If 0>, then we call >(P)=\\> the of at infinity and =v(P)> the of at zero. If , then >(P)=v(P)=\\>. We write >> or > when it is clear from the context whether we are working near > or . 
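Returning to the elementary exponential tail bounds of the preceding lemmas (in our reading, the first of them states $\int_X^{\infty} x^{\beta} e^{-x}\, \mathrm d x \leq 2\, X^{\beta} e^{-X}$ for $X \geq 2\beta$), a quick numerical spot-check (our own illustration) confirms the inequality on sample parameters:

```python
import math

def tail_integral(beta, X, T=120.0, steps=50000):
    # ∫_X^T x^beta e^(-x) dx by composite Simpson; the tail beyond T
    # is negligible for the parameter ranges used below.
    h = (T - X) / steps
    total = 0.0
    for i in range(steps + 1):
        x = X + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * x ** beta * math.exp(-x)
    return total * h / 3

for beta, X in [(-1.0, 1.0), (0.0, 2.0), (2.0, 4.0), (3.0, 10.0)]:
    assert X >= 2 * beta
    bound = 2 * X ** beta * math.exp(-X)
    assert tail_integral(beta, X) <= bound
```

For instance, with $\beta = 2$, $X = 4$ the exact integral is $(X^2 + 2X + 2)e^{-X} = 26\,e^{-4}$, comfortably below the bound $32\,e^{-4}$; such bounds are what make truncating the acceleration and Laplace integrals at a finite radius affordable.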
Now consider a differential operator <\equation*> L=L*\+\+L\\[z>][\](L\0), where =z*\=z*|\ z>>. The of is defined to be the set of all pairs \)\\\\> with >=(L)>\0>. The Newton polygon (see figure ) of at infinity ( zero) is the convex hull of <\equation*> {(x,\+\*y):(i,\)\supp L,0\x\i,y\0}, where =\1> ( =1>). The boundary of the Newton polygon consists of two vertical halflines and afinite number of edges. The of (the Newton polygon of) is the sequence ,\),\,(i,\)> of points with \\\i=r>, such that the -th edge of the Newton polygon is precisely the segment which joins ,\)> to ,\)>. We call <\equation*> \=-\|i-i> the of the -th edge. From the definition of the Newton polygon as aconvex hull, it follows that <\equation*> v(L)-v(L>)\\*\*(k-i) for all . We call =\=\*\> the of . <\big-figure|||>|gr-mode||gr-line-arrows|none|gr-fill-color|pastel blue|gr-line-width|2ln|gr-geometry||gr-frame|>||||||>>||||||>>|||||>>>||>>||||||||>>>|||>>||||||>>||||||>>||||>>>||>>||||>>>|||>>|||||>>> Illustration of the Newton polygons at infinity and zero of the operator +2*z*\-z*\+(7*z-3*z)*\+11*z>. Given \[z>][\]> and \\[z>]>, we define > L> to be the operator which is obtained by substituting +\> for > in . For all , we have <\equation*> (\> L)(f)=\\/z>*L(\\/z>*f). In the case when \\>, we have <\equation*> supp \> L\supp L+(\\,0). In particular, the Newton polygon of > L> and coincide, both at zero and infinity (see figure ). In general, only the slopes which are steeper than the exponent of the dominant monomial of > coincide. <\big-figure|||>|gr-mode||gr-line-arrows|none|gr-fill-color|grey|gr-line-width|2ln|gr-color|grey|gr-geometry||gr-frame|>|||||||>>|||||||||>>||||>>>||>>|||>>>|||>>||||||||||>>||||>>>||||>>|||>>>|||>>|||||||||>>||||>|>|>|>|>|>|>|>>>> Illustration of the Newton polygons at infinity of from figure and M=\+2*z+8*\+12*z-z+24*\+24*z-7*z+7*z+32*\+27*z-10*z+14*z+16>. Let \\>> and consider the transformation >:z\z>>. 
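The slopes just defined can be read off from a convex hull of the support of $L$. A small sketch (our own illustration; sign and orientation conventions vary between authors, and ours is arranged so that the Airy operator $\partial^2 - z$, i.e. $z^{-2}(\delta^2 - \delta) - z$ in terms of $\delta = z\,\partial$, gets steepest slope $3/2$ at infinity, matching its exponential parts $\exp(\pm\tfrac23 z^{3/2})$):

```python
from fractions import Fraction

def slopes_at_infinity(support):
    """support: list of (i, v) pairs, one per term c * z^v * delta^i.
    Returns the slopes of the Newton polygon at infinity, read off the
    upper convex hull of the support points, steepest first."""
    best = {}
    for i, v in support:              # keep the highest power of z per order i
        best[i] = max(v, best.get(i, v))
    hull = []
    for p in sorted(best.items()):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop while the middle point lies on or below the chord
            if (x2 - x1) * (p[1] - y1) >= (y2 - y1) * (p[0] - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return [Fraction(y1 - y2, x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# Airy: z^-2 delta^2 - z^-2 delta - z  ->  support {(2,-2), (1,-2), (0,1)}
assert slopes_at_infinity([(2, -2), (1, -2), (0, 1)]) == [Fraction(3, 2)]
```

A slope of $0$ (as for the Euler operator $\delta$, support $\{(1,0),(0,0)\}$) signals a regular singularity, while a positive steepest slope bounds the exponential growth rate of solutions, in accordance with the theorems recalled below.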
If >>, then <\equation*> z*|\ z>=u>*|\ u>>=>*u*|\ u>, so the transformation >> naturally extends to [z>][\]> by sending > to 1>*\>. We have <\equation*> supp \> L={(i,\*\):(i,\)\supp L}. Consequently, if <\equation*> (i,\),\,(i,\) is the outline of , then <\equation*> (i,\*\),\,(i,\*\) is the outline of > L>. In particular, > L>=\|\\|*\>. Of course, if \0>, then we understand that the roles of infinity and zero are interchanged. In figure , we have illustrated the effect of the transformation >> on the Newton polygon. <\big-figure|||>|gr-mode||gr-line-arrows|none|gr-fill-color|default|gr-line-width|2ln|gr-grid||gr-edit-grid||gr-grid-aspect|||>|gr-grid-aspect-props|||>|gr-geometry||gr-frame|>|||||||>>|||||||||>>||||>>>||>>|||>>>|||>>||||||||||>>||||>>>|||>>||||||||>>|||||||>>>|||>>||>>> Illustration of the Newton polygons at infinity of from figure and L=16*\+16*z*\-4*z*\+14*z-6*z*\+11*z>. \; Let us now consider the analogue of the formal Borel transform |~>> from section for differential operators. It is classical that the formal Borel transform satisfies <\eqnarray*> |~>(z*\ f)>||*|~> f;>>||~>(z1>*f)>||> |~> f.>>>> for z*\[[z]]>. Rewritten in terms of the operators =z*\> and >=\*\>>, this yields <\eqnarray*> |~>(\ f)>||>+1) |~> f;>>||~>(z1>*f)>||1>\>) |~> f.>>>> This induces a natural >-algebra morphism :\[z1>][\]\\[\1>][\>]>, by setting <\eqnarray*> z1>>||1>*\>;>>| \>||>+1.>>>> Each term *z*\> of an operator \[z1>][\]> gives rise to a contribution <\equation*> \(L*z*\)=\*(L*\>+c*\>+\+c) to L>, for suitable constants ,\,c\\>. In particular, <\equation*> supp \(L*z*\)\(i-j,j)+(\1,0)*\. Let ,\),\,(i,\)> be the outline of at infinity and for all , let <\equation*> |^>=|1-\>. If \\1>, then the -th edge gives rise to an edge with slope |^>> in the Newton polygon of L> at zero. If \1>, then it gives rise to an edge with slope |^>> in the Newton polygon of L> at infinity (see figure ). 
In addition, if > contains several terms, then the Newton polygon of L> at infinity also contains an edge with slope1>. <\big-figure|||>|gr-mode||gr-fill-color|default|gr-line-arrows|none|gr-line-width|2ln|gr-color|grey|gr-geometry||gr-frame|>||||>>|||||>>||||>>>||>>||||>>>||>>||||>>>||>>||||>>>||>>||||||||||||>>>||>>||||>>||||||||>>>||>>||||>>>||>>||||||>>||||>>>||>>||||||||>>|||||>>|||||>>||||>>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>|>||||||>>||||>>>||>>|||||>>>||>>||||||||||>>||||||>>||||>>>||>>||||||||>>>||>>|||||||>>>>> The left hand column shows the Newton polygons at infinity of the operators -*\->*\->> and L=*\->*\->*\->>. At the right hand side, we have drawn the Newton polygons of their Borel transforms L=1->*\+4->->*\+6->->*\+4->+>*\+1> and \ L=\>->*\+->->*\++>*\++>->*\++>+>*\+> at infinity (the middle column) and at zero (the right hand column). Having chosen whether we work near infinity or near the origin, let <\eqnarray*> >||[[z*\>>]][log z];>>|>||[z*\>>];>>|>||*z>)[\].>>>> Given \>, the set ={\\\:f>\0}> is called the set of of , and the number =max{\\*\:\\\,P>\0}\{0}> the of . More generally given a subvector space > of >, we denote >={\:f\\}> and >=max {\:f\\}>. The Newton polygon provides a lot of information about the structure of the subvector space \\> of formal solutions to . In the sequel, we will use the following classical consequences of the Newton polygon method: <\theorem> Let \(z)[\]>> be monic, of order and assume that > is algebraically closed. Then the equation admits a full basis of solutions in >, =r>. Moreover, each basis element may be chosen so as to have a unique exponential part.> <\theorem> Let \\\\> be the slopes of the Newton polygon of . Then <\enumerate-alpha> :f\\>}={\*\,\,\*\}>. >=\>.> Let > be an algebraically closed subfield of >. Consider the coordinates =(z,\,z)> and corresponding derivatives =(\,\,\)> ,\,z>. 
An analytic function in > is said to be over >, if it satisfies anon-trivial linear differential equation f=0> with \\(\)[\]> for each {1,\,n}>. Equivalently, we may require that [] f> is a finitely generated module over (\)>. The second criterion implies the following classical proposition : <\proposition> Let and be holonomic functions in >. Then <\enumerate-alpha> Any rational function in (\)> is holonomic. is a holonomic function. is a holonomic function. f> is a holonomic function for all {1,\,n}>. Given a point > on the Riemann-surface > of , the specialization =u)> is holonomic. Given algebraic functions ,\,g> over (\)> the composition <\equation*> f\(g,\,g):\\f(g(\),\,g(\)) is holonomic. <\proof> The property () follows from the inclusion <\equation*> \[](f*g)\(\[] f)*(\[] g) and the fact that the dimension of the right-hand side is finite over (\)>. All other properties are proved in a similar way. A more interesting closure property is the stability under definite integration. Consider a holonomic function in > and a point > on its Riemann surface >. Let > be the Riemann surface of the specialization =\)>, where =(z,\,z)> and =(u,\,u)>. Consider a path :(0,1)\\> on > with possibly singular end-points. If > is singular at \{0,1}>, then we assume that there exists a neighbourhood > of >, such that ,\):(0,1)\\> is a path on > for all \\> and \>f(\,\(t))=0>. We now have: <\proposition> The integral )=>f(\,z)*\ z> is a holonomic function. <\proof> It suffices to show that is holonomic in a neighbourhood of >. Let =(p,\,p)\\> be such that <\equation*> \[] f\\[]\> f=(> f:0\k\p,\,0\k\p). Let > and > be the specializations of in > at the end-point starting point of >. Notice that > and > are defined in a neighbourhood of >. Setting =(\,\,\)>, the space <\equation*> \\(\[]\> f)+(\[]\> f) is finite dimensional over (\)>. For each \\> and \>, let <\equation*> I;l>=>z*(> f)(\)*\ z. 
The differential equation for in > yields a finite relation <\equation*> =0>>C,i>*I,k);l+i>=0, with ,i>\\(\)> for all >. Partial integration also yields a relation <\equation*> I,i);l>->*I,i+1);l+1>\\ for every . Combining these relations, we obtain a non-trivial relation <\equation*> A,0>*I,0);l>+\+A,q>*I,0);l-q>\\, where ,0>,\,A,q>\\(\)[l]>. For which are not a root of ,0>>, we thus obtain a recurrence relation for ,0);l>>. Therefore, the space <\equation*> \=(I;l>:0\k\p,\,0\k\p,k=0,l\\)+\ is again finite dimensional over (\)>. We conclude our proof with the observation that > is stable under >. Let us now turn our attention to the one-dimensional case. Given a monic differential operator \(z)[\]>, we denote by > the space of solutions to the equation at a given point. In the case of formal solutions at zero or infinity, we will also write =\>>. Inversely, given a vector space of formal series, analytic germs or analytic functions on some domain, we say that \(z)[\]> on if . We say that is a for if =V>, in which case is said to be . Given two operators \(z)[\]>, we know by proposition that there exists an operator \(z)[\]> which vanishes on +\>. It turns out that the operator L> of minimal order with this property is actually a vanishing operator for +\>. A similar property holds for the operators L>, >> and >> of minimal orders which vanish on (\*\)>, >, (\\\:\=z)>, where \>>. What is more, there exist algorithms for computing these vanishing operators. In this section, we will briefly recall these algorithms, and thereby give an effective proof of lemma below. The algorithms are all more or less classical, but we could not find a reference where they are all described together. We will also prove a slightly weaker result for the operation () which associates a major to a minor. <\lemma> Let be monic differential operators in (z)[\]> and \>>. <\enumerate-alpha> There exists a unique monic L\\(z)[\]> with L>=\+\>.
There exists a unique monic L\\(z)[\]> with L>=(\*\)>. There exists a unique monic >\\(z)[\]> with >>=\>. There exists a unique monic >\\(z)[\]> with >>=(\\\:\=z)>. We notice that L> coincides with the least common left multiple of and in the Ore ring (z)[\]>. Indeed, any common left multiple vanishes on +\> and any operator which vanishes on > > right divides . One may thus compute L> using any classical algorithm for the computation of least common left multiples, such as the Euclidean algorithm. Given formal solutions and to and , the product and its successive derivatives *G+F*G>, *G+2*F*G+F*G>, may all be reduced using the relations . In other words, \\=r,j\s>\(z)*F*G>, for all , where and denote the orders of . Consequently, there exists a (z)>-linear relation among ,(F*G)> in >. By linear algebra, we may compute the monic operator of smallest order with in >. Using an adaptation of the proof of , we will show that L>. Let ,\,f> and ,\,g> be fundamental systems of solutions to at a non-singular point, considered as elements of the field > of convergent Laurent series at this point. Let ,\,C> and ,\,D> be formal indeterminates. Then the substitutions <\eqnarray*> >|>|*f+\+C*f(i\r)>>|>|>|*g+\+D*g(j\s)>>>> yield an isomorphism <\equation*> \:\=\[F,\,F,G,\,G]\\=\[C,\,C,D,\,D]. Now consider a monic operator \(z)[\]> of smaller order than . Using the relations , we may rewrite as a non-zero element of \\>. It follows that (N(F*G))\0>. Consequently, there exist constants ,\,c,d,\,d\\> with (N(F*G))(c,\,c,d,\,d)\0>. Setting *f+\+c*f> and *g+\+d*g>, we infer that 0>, so is not a vanishing operator of (\*\)>. This shows that is indeed the differential operator of lowest order which vanishes on (\*\)>. The proof that (\*\)> is closed is based on differential Galois theory: when computing the solutions to operators in (z)[\]> in a suitable Picard-Vessiot or D-algebraic closure >, any differential automorphism of > over (z)> leaves both > and>, whence (\*\)>, invariant. 
But, given a finite dimensional subvector space > of > which is invariant under any differential automorphism, we may explicitly construct an operator \\(z)[\]> with >=\>, )>. This shows that (\*\)> is closed. If , then is right divisible by >, so we must have >*\>. Otherwise, the least common multiple of and > in (z)[\]> has order , so there exist operators of order and of order and with >. These operators may again be computed using a modified version of the Euclidean algorithm. Since >>=dim \> and =0>, we have >=B>. In order to compute the operator >>, it is more convenient to work with the derivation > instead of >. It is easy to see that this changes the definitions of the operators L>, L>, >> and >> only up to a multiple by a power of . Given a primitive -th root of unity \\>, let > L> be the operator with > L)(z)=L(\*z)> for all . Then we have > L)(f\(\*z))=L(f)\(\*z)> for all , whence (\*z)> is a root of > L> if and only if is a root of . By what precedes, it follows that =L\\> L\\\\> L> satisfies >=\+\\(\*z)+\+\\(\*z)>. Furthermore, > \=\> implies that \\(z)> for all . Consequently, \\\(z)[\]> and we conclude that >=\ \>. Consider the operation > which associates <\equation*> >=\ =1|2*\*\>*(\)|\-\>*\ \ to >. We have <\eqnarray*> >|| )+*\>*(u)|\-u>-(0)|\>>>|(\*)>||*\ -*\>*(\)*\ \.>>>> Given a relation =0> for >, where \[\][\]> has order , we thus obtain a relation <\equation*> >=)|\*(\-u)> for some polynomial with transcendental coefficients. Setting <\equation> >\\ [\*(\-u)*L], it follows that > >=0>. By theorem , we notice that the growth rate of >> at zero or infinity is the same as the growth rate of > at zero or infinity, since *\> is stable under differentiation and integration without constant term, for each \\>. Lemma admits several useful consequences for what follows. <\corollary> If the coefficients of and are analytic on an open or closed subset > of >, then the same thing holds for the coefficients of L>, L> and >>.
<\proof> Given functions ,\,h>, let ,\,h>> denote their Wronskian. If ,\,h> is a basis of the solution space > of a monic operator \(z)[\]>, then we recall that the operator is determined in terms of ,\,h> by the formula <\equation> L f=,\,h>|W,\,h>> In particular, if ,\,h> are analytic on >, then so are the coefficients of , as is seen by expanding the right-hand side of (). It now suffices to apply this observation to L>, L> and >>. <\corollary> Let \(\)[\]> be monic and \>>. Then <\enumerate-alpha> L>=\\\>. L>=\*\>. >>=\>. >>=\>. <\proof> This follows directly from the lemma together with theorem . Consider a monic differential operator +L*\+\+L> whose coefficients are analytic functions on a Riemann surface >. Given a point \>, it is well known that there exists a unique <\equation*> \=>|>|>>>>> of analytic solutions to at with the property that =\> for all . Since is linear, an arbitrary solution to is uniquely determined by the vector <\equation*> F(z)=>|>>|(z)>>>>> of its initial conditions at by <\equation> f=\*F(z). More generally, given a path z> on > from to another point >, the values of the analytic continuations of ,f> along the path also linearly depend on . Consequently, there exists a unique scalar matrix z>=\z>> with <\equation> F(z)=\z>*F(z). We call z>> the for along the path z>. Dually, we have <\equation> \=\>*\z>, because of (). Also, if \z> is a second path, then <\equation> \z\z>=\\z>*\z> and in particular <\equation> \\z>=\z>1>. The notion of transition matrices can be generalized to allow for paths which pass through regular or irregular singularities of the operator . In order to do this, we start by generalizing the idea of a canonical fundamental system of formal solutions > at the singularity. In the case when the coefficients of are in (z)>, then theorem tells us that there exists a fundamental system of solutions at .
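The composition and inversion rules for transition matrices can be observed numerically on a concrete equation. The sketch below (a plain floating-point illustration, not the paper's certified binary-splitting method) computes the transition matrix of the Airy equation f'' = z·f along a real segment by Runge–Kutta integration of the first-order system, then one can check that composing matrices along consecutive subpaths reproduces the matrix along the full path, and that the determinant (the Wronskian, constant here since the equation has no f' term) stays equal to 1.

```python
def airy_transition(z0, z1, steps=4000):
    """Transition matrix of f'' = z*f along the real segment z0 -> z1,
    by classical RK4 on the system (f, f')' = (f', z*f).  The columns
    are the continuations of the canonical solutions with initial
    conditions (1, 0) and (0, 1).  Numerical illustration only."""
    def flow(y):
        f, fp = y
        h = (z1 - z0) / steps
        z = z0
        rhs = lambda z, f, fp: (fp, z * f)
        for _ in range(steps):
            k1 = rhs(z, f, fp)
            k2 = rhs(z + h / 2, f + h / 2 * k1[0], fp + h / 2 * k1[1])
            k3 = rhs(z + h / 2, f + h / 2 * k2[0], fp + h / 2 * k2[1])
            k4 = rhs(z + h, f + h * k3[0], fp + h * k3[1])
            f += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            fp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            z += h
        return (f, fp)
    c0, c1 = flow((1.0, 0.0)), flow((0.0, 1.0))
    return [[c0[0], c1[0]], [c0[1], c1[1]]]
```

For instance, the matrix along 0 → 1 agrees with the product of the matrices along 0.5 → 1 and 0 → 0.5, in that order, reflecting the composition rule for paths.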
This result is refined in , where we show how to compute a canonical basis ,\,f> of so called ``complex transseries'' solutions, which is uniquely characterized by suitable asymptotic properties. In particular, <\itemize> Each > is of the form =\*z>*\> with \\>, \\> and \\>. Whenever =\> and \\+\> for j>, then )+\-\>=0>. Notice that there are other definitions of ``canonical'' systems of solutions , which share the property that they can be computed effectively in terms of the operator . Given a notion of a ``canonical system of formal solutions at a singularity'', we obtain a dual notion of ``initial conditions at '' for arbitrary formal solutions, via the relation(). Now assume in addition that, for a suitable sectorial neighbourhood \\> of , we are able to associate a genuine analytic function (f)> to any formal solution at . Then either() or() yields a definition for the transition matrix along a straight-line from to >. In general, the association :f\\(f)> depends on one or several parameters, like the non-singular directions in the accelero-summation procedure. We will now show how to encode these parameters in a suitable generalization of a broken-line path. Assume from now on that \(z)[\]>. We define a as being a path \z\\\z>, where each > is either <\itemize> A non singular point > in >. A regular singular point \\> with a direction > (and we denote =(\)>>). An irregular singular point \\> with critical times > and directions > (and we denote =(\),\>>). Furthermore, we assume that >>\\,\>> (where >>(\)=\>(\+\)> for > with -k*\\|\\/2>), 1>-\)-k*\\|\\/2>. Moreover, for each l>, the open ball with center > and radius -\\|> is assumed to contain no other singularities than >. If the > are all non singular or regular singular, then we call \z\\\z> a . 
Now given an irregular singular point \\>, such that >>\\,\>> for critical times > and directions >, we define the transition matrix <\equation*> \,\>\z>=,\> f,0>>(z)>|>|,\> f,r-1>>(z)>>|>||>>|,\> (f,0>>)(z)>|>|,\> (f,r-1>>)(z)>>>>>, for any with )-k*\\|\\/2> and such that is sufficiently close to >. For regular singular points \\>, a similar definition was given in. In view of () and (), we may extend this definition to arbitrary singular broken-line paths. In particular, it can be checked that the Stokes matrices for are all of the form <\equation*> \,\,\,\>=\,\>\\+\\\,\>>=\,\>\\+\>1>*\,\>\\+\>. Notice that this definition does not depend on the choice of >. In a similar way as in, it is also possible to construct a suitable extension |^>> of > with ``irregular singular points'', in such a way that singular broken-line paths may be lifted to|^>>. However, such an extension will not be needed in what follows. It is well known that the theory of Gröbner bases generalizes to partial differential operators in the ring (z,\,z)[\,\,\]>. Consider a zero-dimensional system of such operators given by a Gröbner basis =(L,\,L)>. Let > be the set of tuples,\,k)>, such that \k,\,l\k> holds for no leading monomial >*\*\>> of one of the >. We may enumerate ={\,\,\}>, with \\\\> for a fixed total ordering >> on the monoid>. Given a non-singular point \\> for >, there again exists a unique <\equation*> \>=>>|>|>>>>>> of analytic solutions to f=0> at with the property that >f=\> for all . Also, an arbitrary solution to is uniquely determined by the vector <\equation*> F(\)=> f(z)>>|>>|> f(z)>>>>> of its initial conditions at > by *F(z)>. Consequently, the definitions and properties(--) naturally generalize to the multidimensional paths \\> which avoid the singularities of >. Recall that > and |\>> stand for the open and closed disks of center and radius . 
A constant > in > is said to be over > if there exists a linear differential operator +L*\+\+L\\(z)[\]> and a vector of initial conditions \\>, such that the > are defined on |\>> and =f(1)>, where is the unique solution to with (0)=v> for n>. We denote by > the set of holonomic constants over >. <\proposition> \; <\enumerate-alpha> > is a subring of >. Let be a linear differential operator of order in (z)[\]>. Then >\(\)> for any non singular broken-line path > with end-points in >. Let =(L,\,L)> be a Gröbner basis for a zero-dimensional system of differential operators in (\)[]>. Then for any non singular broken-line path > with end-points in >, we have >>\(\)>. <\proof> Consider holonomic constants =f(1)> and =g(1)>, where and are solutions to and with initial conditions in > > and where the coefficients of and are defined on |\>>. By the corollary, the coefficients of L> are again defined on |\>> and *\=h(1)>, where is the unique solution with initial conditions (0)=*f(0)*g(0)\\> for m*n>. A similar argument shows the stability of > under addition. As to (), we first observe that the transition matrix 1>> along the straight-line path from to has holonomic entries, provided that the coefficients of are defined on |\>>. Indeed, by corollary , the coefficients of the monic operators >> with solution spaces > are defined on |\>>. Using a transformation (\-\)*z+\> with \\> and \\>, it follows that \\>> has holonomic entries whenever the > are defined on the closed disk |\>,\|\-\\|>>. Now any broken-line path > is homotopic to a broken-line path \\\\> such that the > are defined on the closed disks |\>,\|\-\\|>>. From (), we therefore conclude that >=\\\>*\*\\\>> has holonomic entries. As to the last property, we first notice that the function +t*\)> is holonomic in for any fixed > and > in >. In a similar way as above, it follows that the multivariate transition matrix from section along a straight-line path \\> has entries in > for sufficiently close > and > in >.
Since any non singular broken-line path is homotopic to the finite composition of straight-line paths of this kind, we conclude by the multivariate analogue of () and (). A number > in > is said to be a over > if there exists a linear differential operator +L*\+\+L\\(z)[\]> and a vector of initial conditions \\>, such that the > are defined on > and =lim1>f(z)>, where is the unique solution to with (0)=v> for n>. We understand that the limit 1> is taken on the straight-line path from to . If is regular singular at , then we call > a over >. We denote by > the class of singular holonomic constants over > and by > the class of regular singular holonomic constants over >. <\proposition> \; <\enumerate-alpha> > is a subring of >. > is a subring of >. Let be a linear differential operator of order in (z)[\]>. Then >\(\)> for any regular singular broken-line path > as in section . Let be a linear differential operator of order in (z)[\]>. Then >\(\)> for any singular broken-line path > as in section . <\proof> Properties () and () are proved in a similar way as above. In view of (), it suffices to prove () and () in the cases of paths of the form >\\+z> and/or ,\>\\+z>. The first case is treated in a similar way as above, so let us focus on the second case. Without loss of generality, we may assume that =0>. Now, as will be shown in detail in section , the matrix ,\>\z>> can be expressed as a product of matrices whose entries are either in >, or of the form <\equation> >*\>*\>|^>(\)*>,k>(\,a)*\ \, or <\equation> >*\>*\>|^>(\)*(\\/z>)*\ \, where ,b,z\\>, \> and |^>> is holonomic with initial conditions in >. Moreover, > may be chosen as large as desired. By the results from section, the integrands are all holonomic, with initial conditions in > at >. Modulo a suitable change of variables of the form =>>, we may therefore reinterpret () and () as the limit in of a holonomic function > on > with initial conditions in > at .
We still have to prove that this limit can be rewritten as the limit of a holonomic function on > with initial conditions in > at . Now, given the equation =0> for >, let ,\,f> be the fundamental system of solutions for at , so that <\equation*> \=\(0)*f+\+\(0)*f. Since (0),\,\(0)> are in >, we have (0)=lim1> g(z)> for suitable holonomic functions > on > with initial conditions in > at and regular singularities at . Now <\equation*> lim1>\(z)=lim1>(f*g+\+f*g)(z), where *g+\+f*g> is a holonomic function on > with initial conditions in > at the origin. Consider a linear differential operator <\equation*> L=\+L*\+\+L whose coefficients are analytic functions on an open or closed subset > of >. We will give an explicit formula for the transition matrix >=\>> along a path > in >. Let us first rewrite the equation as a first order system and give an alternative characterization for the transition matrix. Let <\equation*> M=|||>>|>||>|>||>||>|L>|L>|>|L>>>>> Then the differential equation <\equation> \=M*\ admits a unique solution > with (\)=I>. Given a path \\> in >, it is not hard to see that \\>> coincides with the analytic continuation of > along \\>. Given an analytic function on >, we will denote by >f> the unique analytic function on > given by <\equation*> >f(\)=>>f(\)*\ \. Then the system () with our initial condition admits a natural solution <\equation> \\\>=I+>M+>M*>M+\(\). We will show below that this ``integral series'' indeed converges when \\> is a straight-line path. In fact, using a similar technique, one can show that the formula is valid in general, but we will not need that in what follows. Let (\,\)> and (\,\>)> denote the spaces of continuous >-valued and >>-valued functions on >. Given matrices and of the same sizes and with coefficients in (\,\)> and (\,\>)>, we say that is majorized by , and we write B>, if <\equation*> \|A(\)\|\B(\) for all .
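The convergence of this integral series can be observed numerically in the simplest, scalar case. The sketch below (names and truncation parameters are ad hoc, not from the paper) sums the series 1 + ∫m + ∫m∫m + … for a scalar equation y' = m(z)·y on a real segment, each term being the cumulative trapezoid integral of m times the previous term; for m(z) = z the exact value of the fundamental "matrix" at t is exp(t²/2).

```python
import math

def integral_series(m, t, n=2000, terms=25):
    """Sum the integral series  1 + ∫m + ∫m∫m + ...  for the scalar
    equation y' = m(z) y on [0, t]: each term is the cumulative
    (trapezoid) integral of m times the previous term, evaluated on a
    uniform grid.  One-dimensional instance of the fundamental-matrix
    formula; grid size and truncation order are uncertified."""
    h = t / n
    zs = [i * h for i in range(n + 1)]
    mv = [m(z) for z in zs]
    prev = [1.0] * (n + 1)          # the k = 0 term, identically 1
    total = [1.0] * (n + 1)
    for _ in range(terms):
        integrand = [mv[i] * prev[i] for i in range(n + 1)]
        nxt = [0.0] * (n + 1)
        for i in range(1, n + 1):
            nxt[i] = nxt[i - 1] + h * (integrand[i - 1] + integrand[i]) / 2
        prev = nxt
        total = [total[i] + nxt[i] for i in range(n + 1)]
    return total[-1]
```

The k-th term is bounded by (∫|m|)^k / k!, which is the scalar shadow of the majoration argument that follows.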
Given majorations B> and \>, we clearly have majorations <\eqnarray*> >|>|>>|>|>|>>>> Assuming that every point in > can be reached by a straight-line path starting at >, we also have <\eqnarray*> >>A>|>|>>B,>>>> where <\equation*> >B(\)=-\\|>B(\+-\|\|\-\\|>*\)*\ \. Assume now that is bounded on >. Then there exist constants ,\,\\0> with <\equation*> M\B=|||>>|>||>|>||>||>|>|>|>|>>>>> and we may assume without loss of generality that admits pairwise distinct eigenvalues. From the rules (), () and (), it follows that <\equation*> \\\>\I+>B+>B*>B+\(\). The right-hand side of this majoration can be rewritten as (\|\-\\|)>, where > is the unique solution on >> to the equation <\equation*> \=B*\, such that (0)=I>. Now let and be matrices with <\equation*> B=U1>*D*U, where <\equation*> D=>||>>||>|>|>||>>>>>. Then we have <\equation*> \(x)=U1>**x>>||>>||>|>|>||*x>>>>>>*U. This shows in particular that () converges when \\> is a straight-line path, since it suffices to replace > by a compact convex subset which contains a neighbourhood of \\>. Given an operator with coefficients in (\>)> which are bounded at infinity, it is not hard to explicitly compute a sector |\>,\,R>>> with \\/2> on which the > have no poles and a majorating matrix with coefficients in >. The aperture > may be chosen as close to /2> as desired. Then the results from the previous section yield: <\theorem> There exists an algorithm which, given an operator <\equation*> L=\+L*\+\+L\\(\>)[\] with =O(1)> for all at infinity, computes a sector |\>,\,R>>> and constants \\>> with <\equation*> \<\|\|\>\\\>\<\|\|\>\K*\*\|\-\\|> for all straight-line paths inside |\>,\,R>>>. More generally, given an operator \(\>)[\]> of growth rate \0>, the operator =\> L> has growth rate one and we have <\equation*> \\\>>=\>\(\)>> for all straight-line paths \\> whose image under \\>> is homotopic to the straight-line path >\(\)>>.
Moreover, after replacing > by *\> in > and dividing by a suitable power of >, we observe that > fulfills the conditions of theorem . We thus obtain: <\theorem> There exists an algorithm which, given an operator <\equation*> L=\+L*\+\+L\\(\>)[\] with growth rate \0> at infinity, computes a sector |\>,\,R>>> and constants \\>> with <\equation*> \<\|\|\>\\\>\<\|\|\>\K*\*\|\>-\>\|> for all straight-line paths inside |\>,\,R>>>. <\remark> In fact, the hypothesis that \\> is a straight-line path is not really necessary in theorem. With some more work, one may actually consider sectors of |\>> at infinity with aperture larger than /2>. In theorem, this allows us to impose that the aperture of > be as large as desired. Consider an operator <\equation*> L=L*\+\+L\\[\][\] with growth rate \0> at infinity. Let |\>,\,R>>> be a sector of aperture \\/2> such that > does not vanish on |\>,\,R>>> and such that we have a bound <\equation> \<\|\|\>\\\>\<\|\|\>\K*\*\|\>-\>\|> for all ,\\|\>,\,R>>>. Let <\eqnarray*> >|||1-sin \>*R,>>>> so that the ball centered at )*\*\>> with radius > is just contained in |\>,\,R>>> (see figure), and let \\*2>> be a fixed constant of small bit-size, with \\\=1+\/(R+\)>. <\big-figure|(sector figure omitted)> The sector |\>,\,R>>> and the associated constants , >, > and >. Let ,\\\>*\*\>> with \|\\|\\|\R+\> and \0>. Assuming that *\>,\,\,\\\>, we may now use the algorithm below in order to approximate \\>> at precision >. The computation of |~>\\+\+\*(\-\)> is done using the binary splitting algorithm from .
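Binary splitting itself is easiest to see on the classical example e = Σ 1/n!, which is the observation of Brent recalled in the introduction. The sketch below (names are illustrative) keeps all arithmetic on big integers, so that with fast integer multiplication the cost of n digits is softly linear; this is the same mechanism used for the truncated Taylor expansions in the algorithm that follows.

```python
import math

def binsplit(a, b):
    """Binary splitting for S(a, b) = sum_{n=a+1}^{b} 1/((a+1)(a+2)...n):
    returns integers (P, Q) with S(a, b) = P/Q and Q = (a+1)(a+2)...b.
    Halving the range and recombining keeps every multiplication
    between integers of comparable size, which is what makes the
    method softly linear with fast multiplication."""
    if b - a == 1:
        return 1, a + 1
    m = (a + b) // 2
    P1, Q1 = binsplit(a, m)      # left half: S(a, m) = P1/Q1
    P2, Q2 = binsplit(m, b)      # right half, rescaled by 1/Q1 below
    return P1 * Q2 + P2, Q1 * Q2

P, Q = binsplit(0, 30)
e_approx = 1 + P / Q             # 1 + sum_{n=1}^{30} 1/n!
```

The tail beyond n = 30 is below 1/31! ≈ 10^(-34), so the rational 1 + P/Q already determines e to double precision and far beyond.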
<\with|par-par-sep|2pt> <\algorithm|approx,\,\)>> \; ,\,\\\> as above a matrix |~>> with |~>-\\\>\<\|\|\>\\> <\body> \|\\*\|\\|> <\indent> Let \> be minimal with *(\>-1)*\|\\|>>*|1-\>\|2>>, where =-\\||\*\|\\|>> Consider the expansion \\+t>=\+\*t+\*t+\> Compute |~>\\+\*(\-\)+\+\*(\-\)> at precision /2> Return |~>> <\indent> Let \K*\*\|\>-(\*\)>\|>> Compute |~>\(\,\*\,\/(2*M))> Compute |~>\>(\*\,\,\/(2*\<\|\|\>|~>\<\|\|\>))> Return |~>*|~>> <\theorem> <\enumerate-alpha> The algorithm > is correct. Let \|>,\log \)> and let be the sum of the bit-sizes of > and >. Then the running time of the algorithm is uniformly bounded by n*(log n+s))>. <\proof> The correctness of the algorithm in the ``single-step case'' when \|\\*\|\\|> follows from() and Cauchy's formula, since <\eqnarray*> |~>-\\\>\<\|\|\>>|>|k>\<\|\|\>\\<\|\|\>*\|\-\\|>>||>|k>K*\*\|(\*\)>-\>\|>*-\\||\|\*\\|>>>|||*(\>-1)*\|\\|>>*|1-\>.>>>> In the ``multi-step case'' when \|\\*\|\\|>, we have <\eqnarray*> \*\\\>*\\\*\>-|~>*|~>\<\|\|\>>|>|\*\\\>*(\\\*\>-|~>)\<\|\|\>+\<\|\|\>(\*\\\>-|~>)*|~>\<\|\|\>>>||>|*\<\|\|\>\\\*\>-|~>\<\|\|\>+\<\|\|\>\*\\\>-|~>\<\|\|\>*\<\|\|\>|~>\<\|\|\>,>>>> and the result follows by induction. As to the complexity analysis, let be minimal such that \|*\\\|\\|> and denote <\eqnarray*> >||*\(i\l)>>|>||.>>>> Then the recursive application of the algorithm gives rise to single-step cases for each \\>> with l>. We have \|)=O(log n)> and claim that the precision > at which we approximate each \\>> satisfies \\/(2*M)>, where *\|\>-\>\|>>. Indeed, using induction over , this is clear in the case when . In the multi-step case, we have \M> and |~>\<\|\|\>\M=K*\*\|(\*\)>-\>\|>>. Hence, \\>> is approximated at precision /(2*M)\\/(2*M)>. The induction hypothesis also implies that each \\>> is approximated at precision \\/(2*M)>, where =\/(2*M)> and =K*\*\|\>-(\*\)>\|>>. We conclude that \\/(2*M)=\/(2*M*M)=\/(2*M)>. 
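A stripped-down, floating-point version of this single-step/multi-step scheme, without the certified error bounds or the binary splitting, can be sketched for the model equation f'' = −f continued from the initial conditions of sin. Both function names are illustrative.

```python
import math

def taylor_step(f0, f1, step, nterms=30):
    """One 'single step': continue a solution of f'' = -f, with values
    (f, f') = (f0, f1) at the current point, over a distance `step`
    by summing the local Taylor series, whose coefficients obey
    (k+1)(k+2) c_{k+2} = -c_k."""
    c = [f0, f1]
    for k in range(nterms - 2):
        c.append(-c[k] / ((k + 1) * (k + 2)))
    f = sum(ck * step ** k for k, ck in enumerate(c))
    fp = sum(k * ck * step ** (k - 1) for k, ck in enumerate(c) if k)
    return f, fp

def continue_along(z1, nsteps=8):
    """Multi-step analytic continuation from 0 to z1 for sin (initial
    conditions f(0) = 0, f'(0) = 1), cutting the path into steps as
    in the multi-step case of the algorithm above."""
    f, fp = 0.0, 1.0
    h = z1 / nsteps
    for _ in range(nsteps):
        f, fp = taylor_step(f, fp, h)
    return f
```

The certified algorithm differs in two essential ways: the step length is tied to the distance to the nearest singularity via the geometric ratio, and the truncation order is chosen from the a priori bound so that the accumulated error stays below the target precision.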
Having proved our claim, let us now estimate the cost of each single-step approximation of \\>> at precision \\/(2*M)>. Since \\(\-1)/\\1>, the minimal satisfies <\eqnarray*> ||\log |2*M*\*(\>-1)*\|\\|>>>>>|||log \)+O(l)+O(log M)+O(\|\\|>)>>|||>>> Furthermore, the entries of |~>> are -digit numbers, since <\equation*> \\\>\K*\*\|\>-\>\|> and the size of > is bounded by . By a similar argument as in the start of section 4.1 of , it follows that the >-approximation of |~>> is computed in time using binary splitting. Since , the overall running time is therefore bounded by n*(log n+s))>. Consider a second differential operator \\[\][\]> with growth rate > at infinity. Let be a solution to f=0> with initial conditions in > at a point \\> with =\> and \|\R+\>. Assume that satisfies a bound <\equation> \|f(\)\|\K*\\*\|\\|>> on ,\*\>*\]>, where ,\\0>. Our aim is to compute <\equation*> \=>*\>*\>f(\)*\ \. Now the primitive <\equation*> f(\)=>>f(\)*\ \ satisfies the equation \)(f)=0>, where the operator \\(\1> \)*\\\[\][\]> has growth rate > at infinity. Moreover, f> admits initial conditions in > at >. Assuming that we chose \> and that the bound () holds for the transition matrices associated to , we may now use the following simple algorithm for the approximation of >. <\with|par-par-sep|2pt> <\algorithm|integral_approx)>> \; \\>> an approximation |~>> for > with |~>-\\|\\> <\body> Let be the vector of initial conditions for f> at >, so that f(\)=\\\>*I> Take \\> with =\> such that \|>>K*\\*t>>*\ t\|\\/2> Return ,\,\/(2*\<\|\|\>I\<\|\|\>))*I> In the case when \1>, we notice that <\equation*> >K*\\*t>>*\ t=>>>|\*t>>*\\*t>*\ t\>>>|\>*\\*t>*\ t=|\*\>*\\*T>> for all 1>, so we may take <\equation> \|\\|=maxlround /(\*\*\)),0)|\>|\>,1, where is the largest number in >*{0,\,2-1}> below . In the case when \1>, we may use lemma to replace the bound () by a bound of the form <\equation*> \|f(\)\|\K*\\*\|\\|>>\K*\|\\|>*\\*\|\\|>>, with \\\>. 
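The truncation strategy of integral_approx, where the endpoint of integration is chosen from an exponential tail bound, can be illustrated on a classical example that is not from the paper: Euler's divergent series Σ (−1)^n n! z^n, whose Borel sum is the Laplace integral (1/z) ∫_0^∞ e^(−t/z)/(1+t) dt. The quadrature below is a plain Simpson rule with an ad hoc cutoff, not a certified algorithm.

```python
import math

def borel_sum_euler(z, eps=1e-12):
    """Laplace integral  (1/z) * ∫_0^∞ e^(-t/z) / (1+t) dt,  the Borel
    sum of Euler's series  Σ (-1)^n n! z^n  for real z > 0.  The
    truncation point T comes from the exponential tail bound
    ∫_T^∞ e^(-t/z)/(1+t) dt <= z e^(-T/z), in the spirit of
    integral_approx; the finite part uses Simpson's rule."""
    T = z * math.log(2 * z / eps)      # tail below eps/2
    n = 20000                           # even number of Simpson panels
    h = T / n
    g = lambda t: math.exp(-t / z) / (1 + t)
    s = g(0.0) + g(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return (h / 3) * s / z
```

For small z the result agrees with the optimally truncated partial sums of the divergent series up to the usual e^(−1/z)-sized ambiguity, which is exactly what resummation resolves.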
Then <\equation*> >K*\\*t>>*\ t\>K*t>*\\**t>>*\ t\>>>|\>*\\*t>*\ t=|\*\>*\\*T>> and we may take <\equation> \|\\|=lround / (\*\*\)),0)|\>|\>. For both formulas () and (), we have \|=O(log \|\>)>. Applying theorem , it follows that <\theorem> The algorithm > is correct and its running time is bounded by n)>, where log \>.> Let us now show how to put the results from the previous sections together into an accelero-summation algorithm for holonomic functions. Let \\> be a formal solution with initial conditions in > at the origin to the equation with \[z][\]>. We will first show how to determine the critical times \\\k> in >> and the Stokes directions at each critical time. Having fixed \\\\\\,\,\\\\\\\>, we next detail the effective acceleration procedure and show how to efficiently evaluate ,\> > in a sector close to the origin. Without loss of generality, we may assume that the valuation of at zero is larger than the degree of in . Indeed, it suffices to replace by > and by L> for a sufficiently large . Let \\\\> be the non-horizontal slopes of the Newton polygon of at the origin. We take =1/\,\,k=1/\>, so the critical times are =z>,\,z=z>>. For example, in figure , the critical times are => and =z>. > and >>>For each critical time >, let us show how to compute vanishing operators for > and >>. Let \>> be relatively prime with =a/b>. Since d>, we notice that the valuation of > in =0> is larger than one. <\enumerate> We first compute >\\[z][\]> and L>\\[z][\]>. We may reinterpret L>> as an operator in [z][\]> and notice that L>)(f)=0>. Let be minimal such that n>*\ L>\\[z1>][\]>. We compute the Borel transform =\(zn>*\ L>)\\[\1>]*[\]>. Since =0>(f)\1>, we formally have =0>. In fact, since the accelero-summation process preserves differentially algebraic relations, we will also have =0>. Compute >> with > >=0> using the procedure from section .
For our accelero-summation process to work, it will suffice to avoid the non-zero singularities of the operator >> at each critical time >. In other words, denoting by >> the order of >>, we take ={arg u:u\\>,>>>(u)=0}>. > and >>>Given a critical time >, let us now study the growth rates of > and >> at zero and infinity. By corollary , and with as above, the slopes of the Newton polygon of L>> are *k=k/k,\,\*k=k/k>. By section and formula (), it follows that the non-horizontal slopes of the Newton polygons of > and >> at the origin are <\equation*> |k-k>\\\|k-k>. In particular, if , then >> is regular singular at and shows how to compute the values of >> in the neighbourhood of >>. We also infer that the non-horizontal slopes of the Newton polygon of > and >> at infinity are <\equation*> |k-k>\\\|k-k> and possibly 1>. In particular, if p>, then the growth rate of >> at infinity is |k-k>>. In view of theorem, we may thus apply |\>,k>>> to >> (see below for further details). Also, if , then the growth rate of >> at infinity is zero or one and theorem shows that we are allowed to apply |\>>>> to >>. Given a critical time > with p> and =k/k>, consider the acceleration kernel <\eqnarray*> >,k>(\,\)>||1|2*\*\>*,k>(\,\)|\-\>*\ \>>|||>*\>*\>*t-\*t>>|\-\>*\ \*\ t>>>> The choices of > and will be detailed in the next section. In order to compute (), we need an equation for >> in >, of growth rate )=k/(k-k)> at infinity. Setting <\equation*> \(t)=1|2*\*i>**t>|\-\>*\ \, we have <\equation*> \(t)=\*\(t)--1|2*\*\*t>, whence -\*t*\)=u*(t*\-\*t*\)> and <\equation*> \ \=t*\-((\+u)*t-2)*\+(u*\*t-(2*\+u))*\+u*\*f=0. By looking at the Newton polygon, we observe that > has growth rate at >. Now <\eqnarray*> >,k>(\,\)>||*\>*\>*\>\(t)*\*t>>*\ t>>|||*\*\>*>(\t)-1>*\((\t)>)*\*t>*\ t,>>>> for a suitable contour >. Applying a suitable ramification, followed by >> and 1>> to >, we obtain a vanishing operator > for t)-1>*\((\t)>)>, with growth rate > at infinity.
Although () is not really a Borel transform (at >), it does satisfy the formal properties of a Borel transform. In other words, >=\ \1> A> is a vanishing operator for >> with respect to >, of growth rate )> at =\>. We finally need equations for the integrands of () and (). If p>, then we have shown above how to construct a vanishing operator >> for >,k>> at infinity. In section , we have also shown how to construct a vanishing operator >)>> for each >,k>>. It follows that |\>=(>)>\>> and |^>=(>)>\> are vanishing operators for the first and second integrands in (). Moreover, the operator >)>\> has growth rate /(k-k)> at infinity, by lemma . Similarly, |\>=(\+z1>)>\>> and |^>=(\+z1>)>\> are vanishing operators for the first and second integrands in (), and +z1>)>\> has growth rate at infinity. Assume now that \\,\,\\\> are fixed non singular directions with *\>,\,\*\>\\>. In order to approximate for close to >>, we first have to precompute a certain number of parameters for the acceleration process, which do not depend on the precision of the desired approximation for . In particular, we will compute a suitable sector > near the origin, such that the effective accelero-summation procedure will work for every \>. Besides >, for each critical time >, we precompute <\itemize> The operators >, >>, |^>> and |\>> from the previous section, for >\order(>)> if p> and r\order(L)> if . The starting point \|\>> for >> and >> in () and (). If 1>, then we will require that =k*\/k>. A sector =|\>,\,R>>> near infinity as in section . The point =R*\*\>/(1-sin \)\|\>>, which corresponds to the center of the ball in figure . A point > above > such that >(\,\)=>>(\,\)>, for p>. Let us show more precisely how to do this. >>If > is the smallest non-zero singularity of >>, then we may take > arbitrarily with \|\\>. By construction, > is (at worst) regular singular at , whence so is >>, as we see from (). From , it therefore follows that (a)> admits an n)>-approximation algorithm for each \>.
>, > and >>Given p>, and setting =k/(k-k)>, we use theorem to compute a sector =|\>,\,R>>> and constants , > with <\equation*> \<\|\|\>\\\>>\<\|\|\>\K*\*\|\>-\>\|>\K*\*\|\>\|> for all straight-line paths \\> in >. By lemmas and , we may compute a subsector =|\>,\,R>>\\> and small > and > with =arg u=k*\/k>, such that we have a bound <\equation*> \|>>,k>(\,a)\|\K*\*\|\>\|>(\\\\) for all >> and all \\>. We notice that >,k>(\,a)> is regular singular at the origin (for the same reason as >> above) with initial conditions in >. By , we thus have n)>-approximation algorithms for >,k>(\,a)> for any \|\>>> and \>. > and >>By theorem , we may also compute a sector =|\>,\,R>>> and constants , > with <\equation*> \<\|\|\>\\\>>\<\|\|\>\K*\*\|\-\\|>\K*\*\|\\|> for all straight-line paths \\> in >. Choosing > sufficiently small and > sufficiently large, we obtain a subsector =|\>,\,R>>\\> with <\equation*> \|(\\/z>)\|\K*\*\|\\|> for all r>, \\> and \\=|\>,\,R>>, with > as close to |2>> as desired. > For each {1,\,p}> and >>, let |\>> be the unique solution to >(|\>)=0> with |\>(a)=\> for all >>. Using the analytic continuation algorithm from , we may efficiently evaluate all derivatives of |\>> and its minor |^>> at any non-singular point above >. For each r>, we also denote by > the unique solution to =0> with (z)=\> for all r>. Given p> and >>, there now exist n)>-approximation algorithms for the integrals. <\eqnarray*> >||>>|\>(\)*>,k>(\,a)*\ \;>>|>||>>|^>(\)*>,k>(\,a)*\ \;>>|>||>*\>*\>|^>(\)*>,k>(\,a)*\ \.>>>> Indeed, the first two integrals can be approximated using the algorithm from, applied to the operators *|\>> and |^>>. The last one is computed using the algorithm . Notice that the path in the second integral consists of a circular arc composed with a straight-line segment of constant argument. 
We regard the numbers <\eqnarray*> >||+\+\>>|||>>|\>(\)*>,k>(\,a)*\ \+>>|^>(\)*>,k>(\,a)*\ \>>>> as the entries of a matrix <\equation*> \=>|>|>-1>>>|>||>>|>-1,0>>|>|>-1,>-1>>>>>>. By construction, we thus have <\equation> |\>,k>>*|\>>|>||\>>-1>>>>>>=|\>>|>||\>>-1>>>>>>*\*. Similarly, if , then there exist n)>-approximation algorithms for <\equation*> \=>>|\>(\)*(\\/z>)*\ \+>>|^>(\)*(\\/z>)*\ \, and these numbers again form the entries of a matrix >. By construction, we have <\equation> |\>>*|\>>|>||\>>-1>>>>>>=>|>|>>>>>*\*. Now we already observed that the algorithms from provide n)>-approximation algorithms for the entries of the vector <\equation*> |\>=>(a)>>|>>|>>-1)>(a)>>>>>. From () and (), it follows that <\equation*> \*\*\*\*|\>=(z)>>|>>|(z)>>>>> and the entries of this vector admit n)>-approximation algorithms. Summarizing the results from the previous sections, we have proved: <\theorem> <\enumerate-alpha> There exists an algorithm which takes \[z][\]> with an irregular singularity at on input and which computes the critical times =>,\,z=>> for , together with the sets of singular directions ,\,\> modulo >. In addition, given \k*\/2>, \\\\,\,\\\\\> with *\>,\*\>,\,\*\>\\>, the algorithm computes a sector |\>*\,\,\>> with \\>> to be used below. There exists an algorithm which takes the following data on input: <\itemize> , >, ,\,\> and > as above; A formal solution \\> to =0> (determined by initial conditions in>); |\>*\,\,\>> above >, \> and \\>>. Setting ,\> >, the algorithm computes \\> with (z)-\|\\>. Moreover, setting log \>, this computation takes a time n)>. <\corollary> Singular holonomic constants in > admit n)>-approximation algorithms. The theorem in particular applies to the fast approximation of singular transition matrices from section . Indeed, let =\*z>*\> with \\>, \\> and \\> be one of the canonical solutions to at the origin. 
Then > may be accelero-summed by theorem and >*\> may be evaluated at points above > using fast exponentiation and logarithms. We thus obtain: <\corollary> There exists an algorithm which takes the following data on input: <\itemize> An operator \[z][\]>. A singular broken-line path >. A precision \\>>. The algorithm computes a matrix |~>> with entries in > and |~>-\>\<\|\|\>\\>. Moreover, setting log \>, the algorithm takes a time n)>.> We have summarized the complexities of efficient evaluation algorithms for holonomic functions in table below. In the rightmost column, the complexity bound for divergent series follows from corollary , when composing the transition matrix between zero and a point \\> close to with the non-singular transition matrix from > to . <\big-table|||||||||\>>|>>|>|(n!)>>*z>>|>|>>|>f*z>>| n)>>| n*log log n)>>>|>f*(n!)>*z>>| n)>>| n)>>>>>>> Summary of the complexities of evaluation of different types of holonomic series. We assume that \\>> and that the > satisfy \|\K*\>> for certain \0>. For the series in the last row, we assume that ``evaluation'' is done using an appropriate accelero-summation scheme. For the rightmost column, we do not count the cost of the approximation of the constant itself. <\remark> Although we did not mention the complexity bound for entire series in , the complexity analysis of algorithm C is easily adapted to this case, since <\eqnarray*> ||log n\-1>M+log n)|2*log (n/2)>>>|||log log n\>log n\-1>M+log n)|2*log (n/2)>+O(M(n*log n*log log n))>>|||log log n\>log n\-1>M (n/2)>+O(M(n*log n*log log n))>>|||>=O(M(n)*log n*log log n).>>>> In particular, the computation of exponentials (and logarithms) using this (and Newton's) method is only a factor less efficient than the best known algorithm based on Landen's transform . <\remark> In , we assumed that > is an algebraic number field (i.e. a finite-dimensional field extension of >) rather than the field > of all algebraic numbers over >. 
Of course, both points of view are equivalent, since given a finite number of algebraic numbers ,\,x\\>, there exists an algebraic number field > with ,\,x\\>. It is convenient to work over a fixed algebraic number field > in order to have an algorithm for fast multiplication. For instance, given a basis ,\,x> of >, we may assume without loss of generality that <\equation> x*x=a*x+\+a*x,(a\\) after multiplication of the > by suitable integers. Then we represent elements of > as non-simplified fractions *x+\+p*x)/q>, where ,\,p\\> and \>>. In this representation, and using (), we see that two fractions of size can be multiplied in time . <\remark> In the case when > is a subfield of > which is not contained in the field > of algebraic numbers, the algorithms from this paper and still apply, except that the complexity bounds have to be adjusted. Let us make this more precise, by using the idea from for the computation of Taylor series coefficients of holonomic functions. We first observe that the efficient evaluation of holonomic functions essentially boils down to the efficient evaluation of matrix products <\equation*> M*\*M, where > is a matrix with entries in [k]> (in the regular singular case, one also has a finite number of exceptional values of for which > is explicitly given and with entries in >). Even if \\>, we may still compute the matrix products <\equation*> M=M*\*M using dichotomy <\equation*> M+l>=M;l>*M> as polynomials in [k]> of degree . This requires a time , when working with a precision of digits. Assuming for simplicity that is a perfect square, and taking >, we next use an efficient evaluation algorithm for the substitution of ,m-l}> in >. This requires a time )*log (m))>. We finally compute <\equation*> M=M*\*M in time )>. Assuming that log m>, this yields an algorithm for the -digit evaluation of > of complexity *log m))>. In table , the complexities in the three different rows should therefore be replaced by )*)>, )*log n)> and )*log n)>. 
Indeed, for the first two cases, we have . In the last case, we have the usual overhead. Notice that there is no need to distinguish between the columns. This last paper in a series on the efficient evaluation of holonomic functions deals with the most difficult case of limit computations at irregular singularities, where the formal solutions are generally divergent. We have not only shown how to compute such limits and so-called singular transition matrices in terms of the equation and broken-line paths, but we have also shown that the resulting constants are contained in the very special class > of complex numbers whose digits can be computed extremely fast. Since it is quite remarkable for a number to belong to >, an interesting question is whether there are any other ``interesting constants'' in > which cannot be obtained using the currently available systematic techniques: the resolution of implicit equations using Newton's method and the evaluation of holonomic functions, including their ``evaluation'' at singular points. Because of the emphasis in this paper on fast approximation algorithms, we have not yet investigated in detail the most efficient algorithms for obtaining approximations with limited precision. Indeed, given an initial operator \[z][\]> of order and degree in , ramification, the Borel transform and the multiplication with the acceleration kernel lead to vanishing operators of far larger (although polynomial) size )>. If only limited precision is required, one may prefer to use a naive )>-algorithm for computing the integral transforms, which avoids the computation of large vanishing operators. In some cases, one may also use summation up to the least term, as sketched in the appendix below. In this paper, we have restricted ourselves to the very special context of holonomic functions, even though Écalle's accelero-summation process has a far larger scope. 
Of course, the results in our paper are easily generalized to the case of more general algebraically closed subfields > of >, except that we only get *log n)>-approximation algorithms. Nevertheless, following , it should also be possible to give algorithms for the accelero-summation of solutions to non-linear differential equations. Let \(z)[\]> and let be a solution to with a formal power series expansion =+*z+\>. It is well known that the truncated sum <\equation*> (sum )(z)=+\+*z up to the term *z> for which *z\|> is minimal usually provides an exponentially good approximation for . Even though such truncations do not allow for the computation of an arbitrarily good approximation of the value for fixed , they are well adapted to the situation in which only a limited precision is required. Indeed, in order to compute )(z)>, we may directly apply the binary splitting algorithm from . In this appendix, we will sketch how summation up to the least term can be made more precise using the accelero-summation process. We start from a formal solution =|~>+\+*log z\\> to . Given \>, we define <\equation*> (z)=(sum )(z)=i\l>n\N>()*z*log z. Our aim is to compute an explicit bound for (z)-(sum,\> )(z)> for a suitable non-singular multi-direction >. Modulo a change of variables \*z>, we may take =\>. Consider a critical time >. If , then |~>> > is convergent at the origin, so we may compute a bound of the form <\equation> \|(-)(\)\|\B*C*\*N-1> on an interval ]> at the origin, using . For p>, we assume by induction that we have a bound <\equation> \|(-)(\)\|\B*C*(k*N)|\(k*N)>*\*N-1>+exp (\D*\|k-k>>) on a sector ]> at the origin and for sufficiently large N>. Using a second time, we may also compute bounds for the coefficients of > as a polynomial in >. At each critical time >, this leads to further bounds <\equation> \|(\)\|\B*(C)*\*N-1>*(k*N)|\(k*N)>, for \[c,\)>. 
Assuming that p>, we now have <\eqnarray*> -)(\)\|>|>|+I+I;>>|>||>>(\)*,k>(\,\)*\ \;>>|>||>((\)-(\))*,k>(\,\)*\ \;>>|>||>>(\)*,k>(\,\)*\ \.>>>> We may further decompose <\eqnarray*> >|>|+I+I;>>|>||*C*(k*N)|\(k*N)>>\*N-1>*,k>(\,\)*\ \>>|||*C*\*N-1>*(k*N)|\(k*N)>;>>|>||*C*(k*N)|\(k*N)>>>\*N-1>*,k>(\,\)*\ \;>>|>||*C*(k*N)|\(k*N)>*>exp (\D*\|k-k>>)*,k>(\,\)*\ \,>>>> if 1> and similarly with =0> if . By lemmas and , we may compute >, >, >, > and > with <\equation*> >>\*N-1>*,k>(\,\)*\ \\A*(k*N)|\(k*N)>*A*exp (\A*\|k-k>>), for \(0,c]> and N>. Using (), we thus get <\equation> I+I\A*A*(B*C+B*(C))*(k*N)|\(k*N)>*exp (\A*\|k-k>>). Using the techniques from section , we may also compute a bound <\equation*> \|(\)\|\A*\*\/k>>, for \[c,\)>. Using lemma and (), we may thus compute >, >, > and > with <\equation> I+I+I+I\A*A*(k*N)|\(k*N)>*exp (\A*\|k-k>>), for \(0,c]> and N>. Combining () and (), we may therefore compute > and > such that () recursively holds at stage . In the case when , similar computations yield constants , , , > and a small sector =\,R>> with aperture \\/(2*k)>, such that <\equation> \|(g-f)(z)\|\B*C*\(k*N)*\|z\|+exp (\D*\|z\|1/k>) for all \> and all N>. The optimal > for which this bound is minimal satisfies <\equation*> N\k1>*(C*\|z\|)1/k>. We thus have <\equation*> \|(g-f)(z)\|\B*\(C*\|z\|)1/k>>, for some explicitly computable >. This completes the proof of the following: <\theorem> There exists an algorithm which takes on input <\itemize> A differential operator \(z)[\]> with an irregular singularity at ; The critical times > and non-singular directions > with *\=k*\> for all , and which computes \0> and \\>, such that the bound <\equation*> \|(sum -sum,\> )(z)\|\B*C*\(k*N)*\|z\|+exp (\D*\|z\|1/k>) holds for any \*\,\,R>> and N>. In particular, for some computable constant > and precisions =\> with <\equation> n\(C*\|z\|)1/k>-n we may compute an >-approximation of ,\> )(z)> for \\\*\,\,R>> in time n)>, where the complexity bound is uniform in . 
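To make the appendix concrete, here is a Python sketch of plain summation up to the least term, without the accelero-summation refinement of the theorem; the function names and the choice of Euler's series \(\sum_n (-1)^n\,n!\,z^n\) are our own illustration, not part of the algorithm above.

```python
import math

def sum_to_least_term(term, z, max_terms=1000):
    """Sum a (possibly divergent) series sum_n f_n z^n up to the index N
    at which |f_N z^N| is minimal.  Returns the partial sum together with
    the size of the least term, which sets the expected error scale."""
    best = None
    s = 0.0
    for n in range(max_terms):
        t = term(n) * z**n
        if best is not None and abs(t) > best:
            return s, best        # terms start growing: stop at least term
        s += t
        best = abs(t)
    return s, best

# Euler's series sum (-1)^n n! z^n: the least term occurs near n ~ 1/z,
# and the truncation error there is roughly exp(-1/z).
f_n = lambda n: (-1)**n * math.factorial(n)
approx, err_scale = sum_to_least_term(f_n, 0.05)
```

The size of the least term, returned alongside the sum, is the intrinsic accuracy limit of this crude scheme; the theorem above refines the truncation via accelero-summation, with an explicit bound of the same exponential shape.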
The author would like to thank the first anonymous referee for several helpful comments and corrections. <\bibliography|bib|elsart-harv|all.bib> <\bib-list|36> Borel, E., 1928. Leçons sur les séries divergentes, 2nd Edition. Gauthier-Villars, reprinted by Jacques Gabay. Borodin, A., Moenck, R., 1974. Fast modular transforms. Journal of Computer and System Sciences 8, 366--386. Braaksma, B., 1991. Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Diff. Eq. 92, 45--75. Braaksma, B., 1992. Multisummability of formal power series solutions to nonlinear meromorphic differential equations. Ann. Inst. Fourier (Grenoble) 42, 517--540. Brent, R., 1976a. The complexity of multiprecision arithmetic. In: Complexity of Computational Problem Solving. Brent, R., 1976b. Fast multiple-precision evaluation of elementary functions. Journal of the ACM 23(2), 242--251. Candelpergher, B., Nosmas, J., Pham, F., 1993. Approche de la résurgence. Actualités mathématiques. Hermann. Chudnovsky, D., Chudnovsky, G., 1990. Computer algebra in the service of mathematical physics and number theory (Computers in Mathematics, Stanford, CA, 1986). In: Lect. Notes in Pure and Applied Math. Vol. 125. Dekker, New York, pp. 109--232. Écalle, J., 1985. Les fonctions résurgentes I--III. Publ. Math. d'Orsay 1981 and 1985. Écalle, J., 1987. L'accélération des fonctions résurgentes (survol), unpublished manuscript. Écalle, J., 1992. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques. Écalle, J., 1993. Six lectures on transseries, analysable functions and the constructive proof of Dulac's conjecture. In: Schlomiuk, D. (Ed.), Bifurcations and periodic orbits of vector fields. Kluwer, pp. 75--184. Gosper, R., Schroeppel, R., February 1972. Artificial intelligence memo. Tech. Rep. 239, MIT AI Laboratory, item 178 on the evaluation of functions. 
Haible, B., Papanikolaou, T., 1997. Fast multiple-precision evaluation of elementary functions. Tech. Rep. TI-7/97, Universität Darmstadt. Hendriks, P., Singer, M., 1999. Solving difference equations in finite terms. JSC 27(3), 239--259. Karatsuba, E., 1991. Fast evaluation of transcendental functions. Problems of Information Transmission 27, 339--360. Karatsuba, E., 1993. Fast evaluation of Bessel functions. Integral Transforms and Special Functions 1(4), 269--276. Karatsuba, E., 1995. Fast calculation of the Riemann zeta function (s)> for integer values of the argument . Problems of Information Transmission 31, 353--362. Karatsuba, E., 2000. On the computation of the Euler constant >. J. of Numerical Algorithms 24, 83--97. Martinet, J., Ramis, J.-P., 1991. Elementary acceleration and multisummability. Ann. Inst. Henri Poincaré 54(4), 331--401. Mitschi, C., 1996. Differential Galois groups of confluent generalized hypergeometric equations: an approach using Stokes multipliers. Pac. J. Math. 176(2), 365--405. Moenck, R., Borodin, A., 1972. Fast modular transforms via division. In: Thirteenth annual IEEE symposium on switching and automata theory. Univ. Maryland, College Park, Md., pp. 90--96. Poincaré, H., 1886. Sur les intégrales irrégulières des équations linéaires. Acta Math. 8, 295--344. Ramis, J.-P., 1978. Dévissage Gevrey. Astérisque 59/60, 173--204. Schönhage, A., Strassen, V., 1971. Schnelle Multiplikation grosser Zahlen. Computing 7, 281--292. Stanley, R., 1980. Differentially finite power series. European J. Combin. 1, 175--188. MR 81m:05012. Thomann, J., 1995. Procédés formels et numériques de sommation de séries d'équations différentielles. Expositiones Math. 13, 223--246. van der Hoeven, J., 1997. Automatic asymptotics. Ph.D. thesis, École polytechnique, France. van der Hoeven, J., 1999. Fast evaluation of holonomic functions. TCS 210, 199--215. van der Hoeven, J., 2001a. 
Complex transseries solutions to algebraic differential equations. Tech. Rep. 2001-34, Univ. d'Orsay. van der Hoeven, J., 2001b. Fast evaluation of holonomic functions near and in singularities. JSC 31, 717--743. van der Hoeven, J., 2005a. Around the numeric-symbolic computation of differential Galois groups. Tech. Rep. 2005-4, Université Paris-Sud, Orsay, France, to appear in JSC. van der Hoeven, J., 2005b. Effective complex analysis. JSC 39, 433--449. van der Hoeven, J., et al., 2002. Mmxlib: the standard library for Mathemagix. van der Put, M., Singer, M., 2003. Galois Theory of Linear Differential Equations. Vol. 328 of Grundlehren der mathematischen Wissenschaften. Springer. van Hoeij, M., 1997. Formal solutions and factorization of differential operators with power series coefficients. JSC 24, 1--30. 