CNRS, Département de Mathématiques
Bâtiment 425, Université Paris-Sud
91405 Orsay Cedex, France

This work has been partially supported by the ANR Gecko project.

Abstract. Consider a power series $f \in \mathbb{R}[[z]]$, which is obtained by a precise mathematical construction. For instance, $f$ might be the solution to some differential or functional initial value problem, or the diagonal of the solution to a partial differential equation. In cases when no suitable method is available beforehand for determining the asymptotics of the coefficients $f_n$, but when many such coefficients can be computed with high accuracy, it would be useful if a plausible asymptotic expansion for $f_n$ could be guessed automatically. In this paper, we present a general scheme for the design of such "asymptotic extrapolation algorithms". Roughly speaking, using discrete differentiation and techniques from automatic asymptotics, we strip off the terms of the asymptotic expansion one by one. The knowledge of more terms of the asymptotic expansion then allows us to approximate the coefficients in the expansion with high accuracy.

1. Introduction

Consider an infinite sequence $f_0, f_1, \ldots$ of real numbers. If $f_0, f_1, \ldots$ are the coefficients of a formal power series $f \in \mathbb{R}[[z]]$, then it is well known that a lot of information about the behaviour of $f$ near its dominant singularity can be obtained from the asymptotic behaviour of the sequence $f_0, f_1, \ldots$. However, if $f$ is the solution to some complicated equation, then it can be hard to compute the asymptotic behaviour using formal methods. On the other hand, the coefficients $f_0, f_1, \ldots$ of such a solution can often be computed numerically up to a high order. This raises the question of how to guess the asymptotic behaviour of $f_0, f_1, \ldots$, based on this numerical evidence.

Assume for instance that we have fixed a class $\mathcal{E}$ of "admissible asymptotic extrapolations", such as all expressions of the form

\[ \varphi_n = \Bigl( c_0 + \cdots + \frac{c_k}{n^k} \Bigr)\, \mathrm{e}^{\beta \sqrt{n}}\, \sigma^n\, n^{\alpha}. \qquad (1) \]

Given the first coefficients $f_0, \ldots, f_N$ of $f$, the problem of asymptotic extrapolation is to find a "simple" expression $\varphi \in \mathcal{E}$, which approximates $f$ well in the sense that the relative error

\[ \eta_{\varphi, N} = \max_{n \in \{L, \ldots, N\}} \frac{|f_n - \varphi_n|}{|f_n| + |\varphi_n|} \qquad (2) \]

is small. Here $L < N$ is a suitably chosen number, such as $L = \lceil N/2 \rceil$. In general, we are just interested in a good extrapolation formula, and not necessarily in the best one in the class $\mathcal{E}$. It is also important that the extrapolation continues to provide good approximations for $n > N$, even though we have no direct means of verification.

A good measure for the complexity of an expression $\varphi \in \mathcal{E}$ is its number $\nu_\varphi$ of continuous parameters. For instance, taking $\varphi$ as in the formula (1), the continuous parameters are $c_0, \ldots, c_k$, $\sigma$, $\alpha$ and $\beta$, whence $\nu_\varphi = k + 4$. Another possible measure is the size $\chi_\varphi$ of $\varphi$ as an expression. For our sample expression $\varphi$ from (1), this yields $\chi_\varphi = 9 + 6k$, when counting one for each of the operations $+$, $\times$, $/$, $\wedge$, $\exp$ and $\sqrt{\phantom{n}}$, as well as one for $n$ and for each of the constants. Since the expression size $\chi_\varphi$ depends on the way operations are encoded, the measure $\nu_\varphi$ should generally be preferred. Notice also that one usually has $\chi_\varphi \leq C \nu_\varphi$ for a fixed constant $C$.

In fact, the systematic integration of "guessing tools" into symbolic computation packages would be a useful thing. Indeed, current systems can be quite good at all kinds of formal manipulations. However, in the daily practice of scientific discovery, it would be helpful if these systems could also detect hidden properties, which may not be directly apparent or expected, and whose validity generally depends on heuristics.
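Concretely, the relative error (2) is straightforward to evaluate numerically. The following Python sketch is our own illustration (it is not part of the paper's algorithms, and the function name is hypothetical); it measures how well a candidate extrapolation fits the known coefficients on $\{L, \ldots, N\}$:

    import numpy as np

    def eta(f, phi, L):
        """Relative error max_{L<=n<=N} |f_n - phi_n| / (|f_n| + |phi_n|).
        Both arrays are indexed so that f[0] corresponds to n = 1."""
        fs, ps = f[L - 1:], phi[L - 1:]
        return np.max(np.abs(fs - ps) / (np.abs(fs) + np.abs(ps)))

    N = 1000
    n = np.arange(1, N + 1)
    f = np.exp(1.0 / n)                      # f_n = e^{1/n}
    phi = 1.0 + 1.0 / n + 1.0 / (2 * n**2)   # a candidate extrapolation
    print(eta(f, phi, L=N // 2))             # ~7e-10: a good fit on {L,...,N}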
Some well-known guessing tools in this direction are the LLL-algorithm [Lenstra et al. 1982] and algorithms for the computation of Padé–Hermite forms [Beckermann–Labahn 1994; Derksen 1994], with an implementation in the gfun package [Salvy–Zimmermann 1994]. Padé–Hermite forms are used by gfun in order to guess closed form formulas for sequences of which only a finite number of coefficients are known.

In the area of numerical analysis, several algorithms have been developed for accelerating the convergence of sequences, with applications to asymptotic extrapolation [Brezinski–Redivo Zaglia 1991; Weniger 2001]. One of the best available tools is the E-algorithm, which can be used when $\mathcal{E}$ consists of linear expressions of the form

\[ \varphi_n = c_1 \varphi_{1,n} + \cdots + c_k \varphi_{k,n}, \qquad (3) \]

with continuous parameters $c_1, \ldots, c_k$, and where $\varphi_1 \succ \cdots \succ \varphi_k$ are general functions, possibly subject to some growth conditions at infinity. Here we use the standard notations $f \prec g \Leftrightarrow f = o(g)$, $f \preccurlyeq g \Leftrightarrow f = O(g)$ and $f \asymp g \Leftrightarrow g \preccurlyeq f \preccurlyeq g$. Notice that the E-algorithm can sometimes be used indirectly: if $\beta = 0$ in (1), then

\[ \log \varphi_n = n \log \sigma + \alpha \log n + \tilde c_0 + \frac{\tilde c_1}{n} + \cdots, \]

with $\tilde c_0 = \log c_0$, $\tilde c_1 = c_1 / c_0$, etc., is asymptotically of the required form (3).

We are aware of little work beyond the identification of parameters which occur linearly in a known expression. A so-called "Gevreytiseur" is under development [Canalis-Durand et al. 2001] for recognizing expansions of Gevrey type with an asymptotic behaviour

\[ f_n = C\, n^{\alpha}\, \sigma^n\, (n!)^{\gamma} + \cdots. \qquad (4) \]

One of the referees also made us aware of work on extrapolations of the form

\[ \varphi_n = c_1 n^{\alpha_1} + \cdots + c_k n^{\alpha_k} \qquad (\alpha_1 > \cdots > \alpha_k), \]

where, in addition to the coefficients $c_1, \ldots, c_k$, some of the exponents $\alpha_i$ may be unknown (even though guesses often exist, in which case one only has to determine whether the corresponding coefficient $c_i$ vanishes) [Jensen–Guttmann 1999; Caracciolo et al. 2005].

In this paper, we will be concerned with the case when $\mathcal{E}$ is the set of exp-log expressions. An exp-log expression is constructed from the rational numbers, real parameters and $n$ using field operations, exponentiation and logarithm. It was already pointed out by Hardy [1910, 1911] that the asymptotic behaviour of many functions occurring in analysis, combinatorics and number theory can be expressed in asymptotic scales formed by exp-log functions. More generally, the field of transseries (which can be seen as the "exp-log completion" of the field of germs of exp-log functions at infinity) enjoys remarkable closure properties [Écalle 1992; van der Hoeven 2006].

The main problem with naive approaches for asymptotic extrapolation is that, even when we have a good algorithm for the determination of the dominant term $\tilde\varphi_n$ of the expansion of $f_n$, it makes little sense to recurse the algorithm for the difference $f_n - \tilde\varphi_n$. Indeed, a small error in the determination of $C$, $\sigma$, $\alpha$ and $\gamma$ for an expansion of type (4) will result in a difference $f_n - \tilde\varphi_n$ which is either asymptotic to $\tilde\varphi_n$ or to $-\tilde\varphi_n$. Even if we computed $f_0, \ldots, f_N$ with great accuracy, this will be of little help with current extrapolation techniques. Without further knowledge about the remainder $R_n = f_n - C\, n^{\alpha}\, \sigma^n\, (n!)^{\gamma}$, it is also hard to obtain approximations for $C$, $\sigma$, $\alpha$, $\gamma$ with relative errors less than $|R_n / f_n|$, no matter which classical or clever numerical algorithm is used.

Instead of directly searching the dominant term $\tilde\varphi_n$ of the asymptotic expansion of $f_n$ and subtracting it from $f_n$, a better idea is to search for a transformation $f \mapsto \Phi(f)$ which kills the dominant term, and recursively apply asymptotic extrapolation to the transformed sequence $\Phi(f)$. (A variant of the idea is to kill leading subdominant terms, as in Aitken's method, after which the limit of the sequence can be read off immediately with high accuracy.) For instance, if $f_n = c_0 + c_1 n^{-1} + \cdots$, then we may take

\[ g_n = \Phi_1(f)_n = \Delta(f)_n = f_{n+1} - f_n = -\frac{c_1}{n^2} + \cdots \]

as our first transformation and $\Phi_2 \colon f \mapsto \Delta(n^2 f_n)$ for the second transformation, as illustrated by the sketch below.
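The following Python illustration (ours, not from the paper; the constants are an arbitrary example) shows how $\Phi_1 = \Delta$ and $\Phi_2 = \Delta \circ R_{n^2}$ successively kill the parameters $c_0$ and $c_1$:

    import numpy as np

    def Delta(f):                 # (Delta f)_n = f_{n+1} - f_n
        return f[1:] - f[:-1]

    N = 1000
    n = np.arange(1, N + 1, dtype=float)
    f = 3.0 + 5.0 / n + 7.0 / n**2           # c_0 = 3, c_1 = 5, c_2 = 7

    g1 = Delta(f)                            # kills c_0;  g1 ~ -c_1 / n^2
    g2 = Delta(n[:len(g1)]**2 * g1)          # kills c_1 as well
    print(n[len(g1) - 1]**2 * g1[-1])        # ~ -c_1           = -5
    print(n[len(g2) - 1]**2 * g2[-1])        # ~ -(c_1 - 2 c_2) =  9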
Similarly, if $f_n = c\, \sigma^n\, n^{\alpha} + \cdots$, then we may take

\[ g_n = \Phi_1(f)_n = Q(f)_n = \log \frac{f_{n+1}}{f_n} = \log \sigma + \cdots \]

and proceed with the transformation $\Delta$. After a finite number of transformations

\[ f^{(l)} = (\Phi_l \circ \cdots \circ \Phi_1)(f), \]

the resulting sequence $f^{(l)}$ loses its accuracy, and we brutally extrapolate it by zero:

\[ \tilde f^{(l)} = 0. \]

Following the transformations in the opposite way, we next solve the equations

\[ \Phi_i \bigl( \tilde f^{(i-1)} \bigr) = \tilde f^{(i)} \qquad (5) \]

for $i = l, \ldots, 1$. The resolution of (5) involves the numeric determination of the continuous parameter $c_i$ (or parameters $c_{i,1}, \ldots, c_{i,k_i}$, in general) which was killed by $\Phi_i$. For decreasing $i = l, \ldots, 1$, the accuracy of the so-computed parameters and extrapolations $\tilde f^{(i)} \approx f^{(i)}$ tends to increase. The end-result $\tilde f = \tilde f^{(0)}$ is the desired extrapolation of $f$.

The above scheme is actually a "meta-algorithm", in the sense that we have not yet specified the precise transformations which are applied. In section 2, we will discuss in more detail some of the "atomic" transformations which can be used. Each atomic transformation comes with a numeric criterion whether it makes sense to apply it. For instance, the finite difference operator $\Delta$ should be used on sequences which are not far from being constant. In section 3, we next describe our main asymptotic extrapolation scheme, based on the prior choice of a finite number of atomic transformations. A priori, our scheme either returns the extrapolation in a numeric form or as a sequence of inverses of atomic transformations. However, such extrapolations are not necessarily readable from a human point of view. In [van der Hoeven 2006], it is shown how to solve the equations (5) in the field of formal transseries. Finite truncations of these transseries then yield exp-log extrapolations for $f_n$, as will be detailed in section 4. Once the shape of a good exp-log extrapolation is known, the continuous parameters may be refined using iterative improvements.

The new extrapolation scheme has been implemented (but not yet documented) in the Mathemagix system. A more manual version of the scheme has been used extensively in [Pauls et al. 2006; Pauls–Frisch 2007; Pauls 2007] and produced satisfactory extrapolations in this context. In section 5, we will present some examples on which we have tested our implementation. Of course, problems occur when the atomic transformations and corresponding numeric criteria have not been chosen with care. In section 6, we examine such problems and possible remedies in more detail. In the last section, we outline some possible generalizations of the scheme.

Combining human insight with classical techniques, such as the E-algorithm, experts will be able to obtain most of the asymptotic extrapolations considered in this paper by hand. Indeed, the right transformations to be applied at each stage are usually clear on practical examples. Nevertheless, we believe that a more systematic and automatic approach is a welcome contribution. Despite several difficulties which remain to be settled, our current scheme manages to correctly extrapolate sequences with various forms of asymptotic behaviour. Our scheme also raises some intriguing problems about appropriate sets of atomic transformations and the approximation of sequences with regular behaviour at infinity by linear combinations of other sequences of the same kind.

Remark. The present article is a completely reworked version of [van der Hoeven 2006]. We adopted a more systematic exposition, centered around the notion of atomic transformations. We also optimized our propositions for "good sets" of atomic transformations (cf. the recipes at the start of section 5 and the discussion about possible improvements in section 6) and implemented the exp-log extrapolation algorithm.
2. Atomic transformations

As explained in the introduction, our main approach for finding an asymptotic extrapolation for $f_n$ is through the application of a finite number of atomic transformations on the sequence $f_n$. Each time that we expect $f_n$ to satisfy a specific kind of asymptotic behaviour, we apply the corresponding atomic transformation. For instance, if we expect $f_n$ to tend to a constant, which corresponds to an asymptotic behaviour of the form $f_n = c + R_n$ with $R_n \prec 1$, then we may apply the finite difference operator $\Delta$. More generally, we are thus confronted with the following subproblem: given a suspected asymptotic behaviour

\[ f_n = \lambda_{c_1, \ldots, c_k}(R)_n, \qquad (6) \]

where $c_1, \ldots, c_k$ are continuous parameters and $R_n$ is a remainder sequence, we should

1. Find a numeric criterion for deciding whether $f_n$ potentially admits an asymptotic behaviour of the form (6), or very likely does not have such a behaviour.

2. Find a sequence transformation $g = \Phi(f)$, such that $g$ only depends on $R$, but not on the continuous parameters $c_1, \ldots, c_k$.

3. Find a way to numerically solve the equation $\Phi(\tilde f) = \tilde g$, when a numerical extrapolation $\tilde g \approx g$ is known.

In this section, we will solve this subproblem for several basic types of asymptotic behaviour (6). In the next section, we will show how to combine the corresponding atomic transformations into a full asymptotic extrapolation scheme. If a sequence $f_n$ satisfies the numeric criterion associated to the atomic transformation $\Phi$, then we say that $f$ is acceptable for $\Phi$.

Finite limits. When $f_n$ tends to a finite limit $c$, we have

\[ f_n = c + R_n \qquad (R_n \prec 1). \qquad (7) \]

A simple numerical criterion for this situation is

\[ \| \Delta f \| \leq \epsilon\, \| f \|, \qquad (8) \]

for a suitable threshold $\epsilon > 0$, where $\| h \| = \max_{L \leq n \leq N} |h_n|$. The continuous parameter $c$ is killed by the finite difference operator $\Delta$:

\[ g_n = \Delta(f)_n = f_{n+1} - f_n = R_{n+1} - R_n. \]

Given an asymptotic extrapolation $\tilde g \approx g$, we may extrapolate $f$ by

\[ \tilde f_n = \tilde c + \sum_{i < n} \tilde g_i = f_N - \sum_{n \leq i < N} \tilde g_i. \]

In compact form, we thus have $\tilde f = \Delta^{-1}_{f_N; N}(\tilde g)$, where $\Delta^{-1}_{v; N}$ is the inverse of $\Delta$ for which the $N$-th coefficient of the result is given by $v$.

Explicit dominant behaviour. Assume that we expect $f_n$ to be of the form

\[ f_n = R_n\, \varphi_n \qquad (R_n \asymp 1), \qquad (9) \]

where $\varphi_n$ is a simple explicit function, such as $\varphi_n = n$, $\varphi_n = n^{-1}$ or $\varphi_n = n^{-2}$. A simple numeric criterion for this situation is

\[ \| \Delta (f / \varphi) \| \leq \epsilon_\varphi\, \| f / \varphi \| \qquad (10) \]

for a suitable threshold $\epsilon_\varphi > 0$. We usually take $\epsilon_\varphi \leq \epsilon$. More generally, $\epsilon_\varphi$ is taken smaller for more complicated functions $\varphi$; for instance, one might take $\epsilon_{n^{-1}} = 0.1$, but $\epsilon_{n^{-2}} = 0.025$. Whenever $f$ is of the form (9), it is natural to apply the scaling operator $R_{\varphi^{-1}} \colon f \mapsto f / \varphi$ to $f$. Given an extrapolation $\tilde g$ for $g = f / \varphi$, we simply scale back $\tilde f = \varphi\, \tilde g$ in order to obtain the extrapolation for $f$.

Other regular behaviour at infinity. In cases when $f_n$ has a regular asymptotic behaviour, which does not fall into any special case, we may write

\[ f_n = s\, \mathrm{e}^{g_n} \qquad (s \in \{-1, 1\}). \]

The only necessary condition for this to work is that the sign of $f_n$ ultimately does not change. The corresponding numeric criterion can simply be taken to be

\[ \frac{f_{n+1}}{f_n} > 0 \qquad (n = L, \ldots, N), \]

with the optional exclusion of the other cases (7) and (9). If the criterion is satisfied, then we may apply the transformation $\operatorname{Log} f = \log (f_n / s) = \log |f_n|$, whose inverse is given by $\operatorname{Log}^{-1}_{s; N} = s \exp$, with $s = \operatorname{sign}(f_N)$. As a variant, we may consider an asymptotic behaviour of the form

\[ f_n = c\, \mathrm{e}^{R_n}, \]

and use the atomic transformation $Q = \Delta \circ \operatorname{Log} \colon f \mapsto \log(f_{n+1} / f_n) = R_{n+1} - R_n$.
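For concreteness, here is a small Python sketch of these atomic transformations and of the inverse $\Delta^{-1}_{f_N;N}$ (our own toy rendering; function names and the threshold default are ours, and $f$ is assumed nonvanishing where logarithms are taken):

    import numpy as np

    def Delta(f):                    # (Delta f)_n = f_{n+1} - f_n
        return f[1:] - f[:-1]

    def Log(f):                      # kills the sign s in f_n = s e^{g_n}
        return np.log(np.abs(f))

    def Q(f):                        # Q = Delta o Log, kills c in f_n = c e^{R_n}
        return np.log(np.abs(f[1:] / f[:-1]))

    def acceptable_for_Delta(f, L, eps=0.1):
        # criterion (8): f is nearly constant on {L, ..., N}
        return np.max(np.abs(Delta(f[L:]))) <= eps * np.max(np.abs(f[L:]))

    def inv_Delta(g, fN):
        # Delta^{-1}_{f_N;N}: f_n = f_N - sum_{n <= i < N} g_i
        return fN - np.concatenate((np.cumsum(g[::-1])[::-1], [0.0]))

    n = np.arange(1.0, 1001.0)
    f = 2.0 + 1.0 / n
    assert acceptable_for_Delta(f, L=500)
    assert np.allclose(inv_Delta(Delta(f), f[-1]), f)    # exact inverse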
Composite transformations. Sometimes, it is useful to combine the above basic atomic transformations in order to detect more specific types of asymptotic behaviour. For instance, if

\[ f_n = c_1 \sigma_1^n + \cdots + c_k \sigma_k^n + R_n \qquad (\sigma_1 > \cdots > \sigma_k,\ R_n \prec \sigma_k^n), \qquad (11) \]

then we may successively apply the operators

\[ \Phi_i = \Delta \circ R_{(\sigma_i / \sigma_{i-1})^{-n}} \]

for $i = 1, \ldots, k$, where $\sigma_0 = 1$. This is equivalent to the E-algorithm and related to the "all-in-one" transformation given by

\[ f \mapsto \begin{vmatrix} f_n & \sigma_1^n & \cdots & \sigma_k^n \\ f_{n+1} & \sigma_1^{n+1} & \cdots & \sigma_k^{n+1} \\ \vdots & \vdots & & \vdots \\ f_{n+k} & \sigma_1^{n+k} & \cdots & \sigma_k^{n+k} \end{vmatrix} \Bigg/ \begin{vmatrix} \sigma_1^{n+1} & \cdots & \sigma_k^{n+1} \\ \vdots & & \vdots \\ \sigma_1^{n+k} & \cdots & \sigma_k^{n+k} \end{vmatrix}. \]

Of course, $\Phi_2 \circ \Phi_1$ is acceptable for $f$ if and only if $\Phi_1$ is acceptable for $f$ and $\Phi_2$ is acceptable for $\Phi_1(f)$. Another example is given by asymptotic behaviour of the type

\[ f_n = c\, n^{\alpha} + R_n \qquad (R_n \prec 1). \qquad (12) \]

In that case, we may apply the transformation $\Delta \circ R_n \circ Q$. In general, the logarithmic and exponential transformations can be rather violent, so it is often useful to restrict their acceptability.

Transformation synthesis. If we are confronted with any other kind of asymptotic behaviour (6), which does not fall in the above cases, then we may investigate systematic ways to synthesize an appropriate atomic transformation $\Phi$, such that the dominant asymptotic behaviour of $\Phi(f)$ depends on $R$ only. The main trouble comes from the fact that we need a transformation of the form $g_n = \Phi(f_n, \ldots, f_{n+r})$. Indeed, consider the modified problem, when $f$ is an infinitely differentiable function rather than a sequence. For simplicity, we will also assume that $\lambda_{c_1, \ldots, c_k}(R) = \lambda_{c_1, \ldots, c_k} + R$, where $\lambda_{c_1, \ldots, c_k}$ is an exp-log function in $c_1, \ldots, c_k$ and $n$. Since the set of algebraically differential functions forms a field which is closed under composition, $\lambda$ satisfies an algebraic differential equation $P(\lambda, \lambda', \ldots, \lambda^{(r)}) = 0$ in $n$, and we simply take $g = P(f, f', \ldots, f^{(r)})$ for our atomic transformation. If $\tilde g$ is sufficiently small, then the intermediate value theorem for transseries ensures the existence of a transseries solution $\tilde f$ to $P(\tilde f, \ldots, \tilde f^{(r)}) = \tilde g$, which can actually be given an analytic meaning [van der Hoeven 2006].

Unfortunately, the field of functions which satisfy an algebraic difference equation is not closed under composition. In particular, let us consider the sequence $n^s$, where $s$ is a formal parameter. We claim that this sequence does not satisfy an algebraic difference equation. Indeed, assume the contrary and let

\[ P\bigl(n^s, \ldots, (n+r)^s\bigr) = 0 \]

be a relation of minimal order and minimal degree. Considering $n$ as a complex number, turning around the singularity at $n = -r$ results in a new relation

\[ P\bigl(n^s, \ldots, (n+r-1)^s, \mathrm{e}^{2 \pi \mathrm{i} s} (n+r)^s\bigr) = 0. \]

Combining this relation with the previous one, we obtain a relation of smaller degree or order. In particular, this shows that for an asymptotic behaviour of the form (12), there exists no algebraic atomic transformation $\Phi$ which will completely eliminate $c\, n^{\alpha}$ and make $R$ apparent in the dominant behaviour of $\Phi(f)$.

3. A meta-algorithm for asymptotic extrapolation

Throughout this section we assume that we have fixed a finite set $\mathcal{A}$ of atomic transformations. One possible choice for $\mathcal{A}$ is $\{\Delta, Q, R_{n^{-1}}, R_n, R_{n^2}\}$, but other selections will be discussed below. Each transformation $\Phi \in \mathcal{A}$ comes with a criterion for acceptability. Furthermore, given an asymptotic extrapolation $\tilde g$ for $g = \Phi(f)$, we assume that we may compute the inverse operator $\Phi^{-1} = \Phi^{-1}_{f_{N - r_\Phi + 1}, \ldots, f_N; N}$ of $\Phi$, for which $\tilde f = \Phi^{-1}(\tilde g)$ coincides with $f$ on the last $r_\Phi$ known coefficients $f_{N - r_\Phi + 1}, \ldots, f_N$.

Given an input series $f$ and an "extrapolation order" $l$, the asymptotic extrapolation scheme simply attempts to apply all composite transformations $\Phi_k \circ \cdots \circ \Phi_1$ on $f$, where $\Phi_1, \ldots, \Phi_k \in \mathcal{A}$ and $r_{\Phi_1} + \cdots + r_{\Phi_k} \leq l$. The scheme returns a set of possible extrapolations. Each asymptotic extrapolation is returned in the form of a recipe $\Xi = \Phi_1^{-1} \circ \cdots \circ \Phi_k^{-1}$, whose continuous parameters are determined during the backward phase. When applying this recipe to the zero sequence, we obtain a numeric extrapolation $\tilde f = \Xi(0)$ of $f$. In particular, the resulting extrapolations can optionally be sorted on the relative error $\eta_{\tilde f, N}$, after which we may keep only the best or a few best results. The inverse $\chi = \Phi_k \circ \cdots \circ \Phi_1$ of a recipe will be called a reductor.
Algorithm extrapolate(f, l)
Input: a finite sequence $f_0, \ldots, f_N$ and the extrapolation order $l$
Output: a set of extrapolations of the form $(\tilde f, \Xi)$ for $f$

  $E := \{ \operatorname{Id} \}$
  For each $\Phi \in \mathcal{A}$ such that $f$ is acceptable for $\Phi$ and $r_\Phi \leq l$ do
    $E' := \operatorname{extrapolate}(\Phi(f), l - r_\Phi)$
    For each $\Xi \in E'$ do
      Compute the parameters $v$ of $\Phi_v^{-1}$ with $\tilde f_n = (\Phi_v^{-1} \circ \Xi)(0)_n = f_n$ for $n = N - r_\Phi + 1, \ldots, N$
      Set $E := E \cup \{ \Phi_v^{-1} \circ \Xi \}$
  Return $E$

Remark. In practice, $\mathcal{A} = \{\Phi_1, \ldots, \Phi_s\}$ is often designed in such a way that no more than one transformation $\Phi_i$ is acceptable at a time. For instance, the criterion for acceptability by $\Phi_i$ might include the condition that $f$ is not acceptable for any $\Phi_j$ with $j < i$. Indeed, it is important to keep the number of accepted transformations small at each stage, in order to prevent combinatorial explosion. For some additional considerations, we refer to section 6.

Remark. In order to increase control over the accuracy, it is possible to perform all computations using interval arithmetic [Moore 1966; Alefeld–Herzberger 1983; Grimmer et al. 2003]. In that case, the coefficients $f_0, \ldots, f_N$ are intervals instead of floating point numbers, and the deterioration of the relative precision will be more explicit during the application of the successive transformations. As to the actual extrapolation, one should replace the extrapolation by zero at the last step by the smallest interval which contains $f^{(l)}_L, \ldots, f^{(l)}_N$, and similarly for the determination of the other continuous parameters. Again, this has the advantage of giving a more precise idea of the accuracy of the continuous parameters. We refer to section 3 of our earlier preprint [van der Hoeven 2006] for a worked example using interval arithmetic.

Clearly, the choice of $\mathcal{A}$ and the corresponding criteria is primordial for the success of our extrapolation scheme under various circumstances. Reduced sets $\mathcal{A}$ of transformations and strict criteria are recommended in the case when the user already has some idea about possible shapes for the asymptotic expansion. Larger sets $\mathcal{A}$ and looser criteria may allow the detection of more complex asymptotic behaviours, at the risk of finding incorrect extrapolations. Let us discuss a few other possible choices for $\mathcal{A}$ in more detail.

Power series expansions. Assume that the sequence $f_n$ admits an asymptotic expansion in the form of a power series

\[ f_n \asymp c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots. \qquad (13) \]

Applying the transformation $R_{n^2} \circ \Delta$ on this expansion, we obtain

\[ (R_{n^2} \circ \Delta\, f)_n \asymp -c_1 + \frac{c_1 - 2 c_2}{n} + \frac{-c_1 + 3 c_2 - 3 c_3}{n^2} + \cdots, \]

which has a similar shape. Assuming that none of the coefficients $c_i$ vanishes, the set $\mathcal{A} = \{R_{n^2} \circ \Delta\}$ or $\mathcal{A} = \{\Delta, R_{n^2}\}$ therefore suffices for the extrapolation of $f$. In the case when some of the coefficients vanish, we should rather take $\mathcal{A} = \{\Delta, R_n, R_{n^2}\}$.

Logarithmic coefficients. Assume now that $f$ has an asymptotic expansion

\[ f_n \asymp \sum_{i, j} c_{i,j} \frac{(\log n)^j}{n^i}. \]

If $L = \lceil \lambda N \rceil$ for a fixed constant $0 < \lambda < 1$, then the relative error $\| \Delta f \| / \| f \|$ tends to zero for $N \to \infty$ and fixed $f_n \asymp (\log n)^j$. In particular, if $N$ is large enough, then the asymptotic extrapolation scheme will attempt to apply $\Delta$ on any sequence $f_n$ with $f_n \asymp (\log n)^j$. Since

\[ n\, \Delta\bigl( (\log n)^j \bigr) = j (\log n)^{j-1} + \cdots, \]

it follows that the set $\mathcal{A} = \{\Delta, R_n, R_{n^2}\}$ usually suffices for the asymptotic extrapolation of $f$ (assuming that $N$ is sufficiently large).

Unknown exponents. In the previous section, we have seen that the operator $\Delta \circ R_n \circ Q$ allows us to detect asymptotic behaviours of the form (12). More generally, assume that $f$ admits a generalized power series expansion with unknown (real) exponents

\[ f_n \asymp \frac{c_1}{n^{\alpha_1}} + \frac{c_2}{n^{\alpha_2}} + \frac{c_3}{n^{\alpha_3}} + \cdots \qquad (c_i \neq 0,\ \alpha_1 < \alpha_2 < \cdots). \]

Then the set $\mathcal{A} = \{\Delta, \Delta \circ R_n \circ Q, R_n, R_{n^2}\}$ will suffice for the asymptotic extrapolation of $f$.
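Returning to the algorithm extrapolate, the following toy Python version is our own drastic simplification (only $\Delta$ is used as an atomic transformation, and all names are ours); it shows the recursive structure: transformations are applied while acceptable, and recipes are rebuilt from the inside out:

    import numpy as np

    def Delta(f): return f[1:] - f[:-1]

    def inv_Delta(g, fN):            # inverse of Delta fixing the last coefficient
        return fN - np.concatenate((np.cumsum(g[::-1])[::-1], [0.0]))

    def acc_Delta(f, L):             # acceptability criterion for Delta
        return np.max(np.abs(Delta(f[L:]))) <= 0.25 * np.max(np.abs(f[L:]))

    # atomic transformations: (name, forward map, criterion, inverse)
    A = [("Delta", Delta, acc_Delta, lambda g, f: inv_Delta(g, f[-1]))]

    def extrapolate(f, l, L):
        """Return a list of (recipe description, numeric extrapolation of f)."""
        results = [("0", np.zeros_like(f))]      # brutal extrapolation by zero
        if l == 0:
            return results
        for name, phi, acc, inv in A:
            if acc(f, L):
                for rec, g_tilde in extrapolate(phi(f), l - 1, L):
                    results.append((name + "^-1 o " + rec, inv(g_tilde, f)))
        return results

    n = np.arange(1.0, 1001.0)
    f = 2.0 + 5.0 / n
    best = min(extrapolate(f, 3, 500),
               key=lambda r: np.max(np.abs(r[1][500:] - f[500:])))
    print(best[0])                   # the deeper recipes give smaller errors here

A realistic set $\mathcal{A}$ would also contain the rescalings $R_\varphi$ and the transformation $Q$, together with the bookkeeping of the retained coefficients $f_{N - r_\Phi + 1}, \ldots, f_N$.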
Simple tails. It often occurs that the asymptotic expansion of $f$ is the product of a complicated "transmonomial" and a simple power series expansion, as in (1). In that case, we may slightly modify our extrapolation scheme in order to allow for a larger set $\mathcal{A} = \{\Delta, Q, R_{n^{-1}}, R_n, R_{n^2}\}$ for the first few steps and a reduced set $\mathcal{A}' = \{\Delta, R_n, R_{n^2}\}$ for later steps.

4. Asymptotic extrapolations in exp-log form

Asymptotic extrapolations of $f$, as computed in the previous section, are given either numerically, or using a recipe in terms of the inverses of the operators in $\mathcal{A}$. For instance, in the case of a power series expansion (13), the computed recipe is typically of the form

\[ \Delta^{-1}_{v_1; N} \circ R_{n^{-2}} \circ \cdots \circ \Delta^{-1}_{v_k; N+1-k} \circ R_{n^{-2}}. \qquad (14) \]

Unfortunately, the corresponding values of $c_0, \ldots, c_k$ cannot directly be read off from this recipe. In this section, we will discuss techniques for obtaining asymptotic extrapolations in a symbolic and more human readable form.

Remark. Of course, "human readability" is a subjective notion. In a sense, recipes of the form (14) are already symbolic expressions for the asymptotics.

Remark. Sometimes, asymptotic extrapolation is used for the accurate computation of some parameters, such as the radius of convergence, rather than a complete asymptotic expansion. In such cases, symbolic post-treatment is not necessary and may even deteriorate the accuracy of the extrapolation.

A convenient and quite general setting for the computation with asymptotic expansions is to restrict our attention to exp-log functions or transseries. An exp-log function is constructed from the constants and an infinitely large indeterminate $n$ using the field operations, exponentiation and logarithm. Several algorithms exist for the computation with exp-log functions at infinity [Shackell 1990; Salvy 1991; Richardson et al. 1996; Shackell 1996; Gruntz 1996]. A transseries is formed in a similar way, except that, under certain support conditions, we allow additions to be replaced by infinite summations. An example of a transseries is

\[ 1!\, n^{-1}\, \mathrm{e}^{n + \sqrt{n} + \cdots} + 2!\, n^{-2}\, \mathrm{e}^{n + \sqrt{n} + \cdots} + 3!\, n^{-3}\, \mathrm{e}^{n + \sqrt{n} + \cdots} + \cdots. \]

The field of formal transseries at infinity is stable under many operations, such as differentiation, integration, composition and functional inversion [Écalle 1992; van der Hoeven 2006]. However, transseries are formal objects and often divergent. Techniques for the computation with transseries can be found in [van der Hoeven 1997, 2006]. The aim of this paper is not to go into details about the computational aspects of transseries. A heuristic approach, which is particularly well adapted to the present context, is described in [van der Hoeven 2008]. From the computational point of view, we recall that a transseries $f$ is approximated by an infinite sum

\[ f = \sum_{i} f_i, \qquad (15) \]

where each $f_i$ is a linear combination of transmonomials. Each transmonomial is either an iterated logarithm or the exponential of another transseries. Furthermore, it is described in [van der Hoeven 2008] how to perform common operations on transseries, such as differentiation, integration, composition, as well as discrete summation $\Delta^{-1}$. Finally, we notice that the technique of summation up to the least term can be used recursively for the approximate numeric evaluation of (15), even if $f$ is divergent [Poincaré 1893]. Whenever $\tilde g$ extrapolates $g$, this allows us in particular to compute $\Delta^{-1}_{f_N; N}(\tilde g) = \Delta^{-1}(\tilde g) + a$, with $a = f_N - \Delta^{-1}(\tilde g)(N)$.

Returning to our asymptotic extrapolation scheme, it follows that the inverses of the atomic transformations from section 2 can systematically be applied to transseries instead of numeric sequences. In other words, given the recipe $\Xi$ of an asymptotic extrapolation, we simply compute $\Xi(0)$ as a transseries. If we want an exp-log extrapolation, then we truncate $\tilde f = f_1 + \cdots + f_k$ for the order $k$ which yields the best numeric extrapolation. Of course, the truncations are done recursively, for all transseries whose exponentials occur as transmonomials in the expansions of $f_1, \ldots, f_k$.
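To make the evaluation device above concrete, here is a small Python illustration of summation up to the least term (our own toy example; Stirling's divergent series is used merely as a familiar instance, it is not one of the transseries manipulated by the scheme):

    import math
    import numpy as np

    def sum_up_to_least_term(terms):
        k = int(np.argmin(np.abs(terms)))    # truncate just before the least term
        return sum(terms[:k])

    # Stirling: log n! ~ n log n - n + (1/2) log(2 pi n) + 1/(12n) - 1/(360 n^3) + ...
    n = 5.0
    c = [1.0/12, -1.0/360, 1.0/1260, -1.0/1680, 1.0/1188]
    tail = [ck / n ** (2*k + 1) for k, ck in enumerate(c)]
    approx = n*math.log(n) - n + 0.5*math.log(2*math.pi*n) + sum_up_to_least_term(tail)
    print(approx, math.lgamma(n + 1))        # agree to ~7 digits already at n = 5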
Several techniques can be used to enhance the quality of the returned exp-log extrapolations. Indeed, due to truncation and the process of summation up to the least term, the accuracies of the numeric parameters in the exp-log extrapolation may have deteriorated with respect to those in the original recipe. Let $\tilde f = \tilde f_{a_1, \ldots, a_k}$ be an exp-log extrapolation depending on parameters $a_1, \ldots, a_k$. One way to enhance the quality of our extrapolation is by iterative improvement. For instance, we may use Newton's method for searching the zeros of the function

\[ \begin{pmatrix} a_1 \\ \vdots \\ a_k \end{pmatrix} \mapsto \begin{pmatrix} \tilde f_{a_1, \ldots, a_k}(N - k + 1) - f_{N-k+1} \\ \vdots \\ \tilde f_{a_1, \ldots, a_k}(N) - f_N \end{pmatrix}. \]

Alternatively, we may perform a least square fit on the range $\{L, \ldots, N\}$.

Another approach to enhancement is to search closed form expressions for the parameters $a_1, \ldots, a_k$, such as integers, rational numbers or simple linear combinations of known constants $\pi$, $\mathrm{e}$, etc. Such closed form expressions can be guessed by continued fraction expansion or using the LLL-algorithm [Lenstra et al. 1982]. In cases of success, it may be worthy to adjust the set of atomic transformations and rerun the extrapolation scheme. For instance, if we find an extrapolation such as

\[ \tilde f_n = 1.5904324561\, n^{1.5000000000} + \cdots, \]

then we may add $R_{n^{-3/2}}$ as an atomic transformation. This spares the determination of one continuous parameter, which increases the accuracy of the successive transformed sequences.

5. Examples

In this section, we will present the results of running our scheme on a series of explicit examples, as well as an example from [Caracciolo et al. 2005] suggested by one of the referees. We have tried several sets of atomic transformations $\mathcal{A}$, using various criteria and thresholds. In what follows, we systematically take $L = \lceil N/2 \rceil$, and the precise algorithm works as follows:

1. If $\| \Delta f \| \leq 0.25\, \| f \|$, $\| \Delta \circ R_n \circ \Delta \circ \operatorname{Log} f \| \leq 0.05$ and $|(R_n \circ \Delta \circ \operatorname{Log})(f)_N| \leq 0.5$ fails (where norms are taken on $\{L, \ldots, N\}$), we apply the transformation $\Theta = \Delta \circ R_n \circ \Delta \circ \operatorname{Log}$ in order to detect expansions of the form $c\, n^{\alpha} + \cdots$. Moreover, if the last coefficient of the sequence $(R_n \circ \Delta \circ \operatorname{Log})(f)$ is close to an integer $p$, within $N^{-1/2}$, then we rather apply $R_{n^{-p}}$.

2. If step 1 does not succeed and $\| \Delta (f / n^p) \| \leq 0.25\, \| f / n^p \|$ for some $p \in \{-2, -1, 0, 1\}$, then we apply the transformation $\Delta \circ R_{n^{-p}}$.

3. If steps 1 and 2 do not succeed and $f$ does not change sign on $\{L, \ldots, N\}$, then we apply $\operatorname{Log}$.

In other words, we first check whether $f$ has an expansion of the form $c\, n^{\alpha} + \cdots$, with a high degree of certainty. If not, then we still try $\Delta \circ R_{n^{-p}}$ for $p \in \{-2, -1, 0, 1\}$, with a bit less certainty. In cases of panic, we fall back on the transformation $\operatorname{Log}$.

We want to emphasize that the above choices are preliminary. We are still investigating better choices for $\mathcal{A}$ and the thresholds, and more particular transformations for the detection of common expansion patterns.
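The following Python sketch is our loose reading of the three decision steps above (the step 1 test is simplified, the helper names are ours, and $f$ is assumed nonvanishing):

    import numpy as np

    def Delta(f): return f[1:] - f[:-1]
    def sup(h, L): return np.max(np.abs(h[L:]))

    def choose(f, n, L):
        """Label of the next transformation to apply (simplified)."""
        # step 1: f_n = c n^alpha + ...; v = (R_n o Delta o Log)(f) tends to alpha
        v = n[:len(f) - 1] * Delta(np.log(np.abs(f)))
        if sup(Delta(v), L) <= 0.05:
            p = round(v[-1])
            if abs(v[-1] - p) <= len(f) ** -0.5:    # alpha close to an integer p
                return "R_{n^%d}" % -p
            return "Delta o R_n o Delta o Log"
        for p in (-2, -1, 0, 1):                    # step 2
            g = f / n[:len(f)] ** p
            if sup(Delta(g), L) <= 0.25 * sup(g, L):
                return "Delta o R_{n^%d}" % -p
        if np.all(f[L:] > 0) or np.all(f[L:] < 0):  # step 3: panic
            return "Log"
        return None

    n = np.arange(1.0, 1001.0)
    print(choose(3 * n**2 + n, n, 500))     # -> R_{n^-2}
    print(choose(np.exp(n / 2), n, 500))    # -> Log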
For our first series of examples, we took $N = 1000$ and considered several explicit sequences. The results have been summarized in Table 1 below. For all examples we used a precision of 512 bits, except for $f^{(4)}$, in which case we used a higher precision. The first sequence $f^{(1)}_n = \mathrm{e}^{1/n}$ is an ordinary power series in $n^{-1}$, which is well recognized (3 decimal digits per iteration, as expected). Notice the remarkable presence of a $Q$-transformation. The second sequence $f^{(2)}_n = n^{\sqrt 2} + n + 1$ demonstrates the ability to detect expansions in $n$ with arbitrary exponents. The third sequence $f^{(3)}_n = n!$ reduces to an ordinary power series expansion after the transformation $\Delta \circ Q$ and is therefore well recognized. The fourth sequence $f^{(4)}_n = \zeta(n)$ shows that we are also able to detect expansions in $\mathrm{e}^{-n}$ with arbitrary exponents. The next two sequences $f^{(5)}$ and $f^{(6)}$ essentially concern the detection of expansions of the form $(a \log n + b)\, n^{-k}$. This works well for a few iterations, but the presence of $\log n$ tends to slow down the convergence and becomes problematic at higher orders. Finally, we considered an expansion in $(\log n)^{-1}$, which should be hopeless for our algorithm due to the slow increase of $\log n$.

    sequence                               best reductor                relative error
    $f^{(1)}_n = \mathrm{e}^{1/n}$         ...                          $10^{-34}$
    $f^{(2)}_n = n^{\sqrt 2} + n + 1$      ...                          $10^{-12}$
    $f^{(3)}_n = n!$                       $\cdots \circ \Delta \circ Q$  $10^{-21}$
    $f^{(4)}_n = \zeta(n)$                 $(\cdots \circ Q) \circ \Delta$  $10^{-350}$
    $f^{(5)}_n = n + \log(n+1)$            ...                          $10^{-9}$
    $f^{(6)}_n = n\, \mathrm{e}^{\sqrt n}$ $\cdots \circ Q$             $10^{-11}$
    $f^{(7)}_n$ (expansion in $1/\log n$)  $\cdots \circ Q \circ \cdots$  $10^{-9}$

Table 1. Results when applying our scheme to several explicit sequences for $N = 1000$ and order $10$. In the table, we have shown the best reductor and the corresponding relative error.

Having obtained the recipes for each of the sequences, we next applied the post-treatment of section 4 in order to find asymptotic expansions in the more readable exp-log form. The results include, among others:

\[ \tilde f^{(1)}_n = 1.000000000 + \frac{1.000000000}{n} + \frac{0.5000000000}{n^2} + \frac{0.1666666667}{n^3} + \cdots \]

\[ \tilde f^{(3)}_n = \mathrm{e}^{\, n \log n \,-\, 1.0000000000\, n \,+\, 0.5000000000\, \log n \,+\, 0.918938\ldots} \bigl( 1 + \cdots \bigr) \]

\[ \tilde f^{(5)}_n = 0.9999911895\, n + 1.002816158\, \log n + \cdots \]

The obtained expansions are remarkably accurate in the cases of $f^{(1)}$, $f^{(3)}$ and $f^{(4)}$. We were particularly happy to recover Stirling's formula in a completely automatic numeric-symbolic manner. The expansions for $f^{(2)}$, $f^{(5)}$ and $f^{(6)}$ are also close to the correct ones, when neglecting terms with small coefficients. The last expansion is more problematic; errors typically occur when a numerically small expression on $\{L, \ldots, N\}$ is incorrectly assumed to be infinitesimal. Such mistakes may lead to the computation of absurdly complicated transseries expansions, which occasionally fail to terminate. For instance, we only obtained exp-log expansions for extrapolations at order 5 in the last two examples.

5.2. Self-avoiding walks on a triangular lattice

Let us now consider the sequence $f_n$, where $f_n$ is the number of self-avoiding walks of length $n$ on a triangular lattice. The first terms $f_1, \ldots, f_9$ of the $N = 40$ known coefficients are as follows:

    6 | 30 | 138 | 618 | 2730 | 11946 | 51882 | 224130 | 964134

It is theoretically known that

\[ f_n \asymp \mu^n\, n^{11/32}\, (a + \cdots) \qquad (16) \]

and the authors of [Caracciolo et al. 2005] actually expect an expansion of the type

\[ f_n \asymp \mu^n\, n^{11/32} \Bigl( a_0 + \frac{a_1}{n} + \frac{a_{3/2}}{n^{3/2}} + \frac{a_2}{n^2} + \frac{a_{5/2}}{n^{5/2}} + \frac{a_3}{n^3} + \cdots \Bigr). \qquad (17) \]

In view of (16), we have applied our asymptotic extrapolation algorithm on $f$, by forcing the transformation $R_{n^{-11/32}}$ at the start. This yields an expansion which is quite close to the expected expansion (17), except that the exponent $3/2$ of the first correction-to-scaling term comes out close to $1.67$. The corresponding reductor is of the form

\[ \Delta \circ Q \circ \Delta \circ \cdots \circ \operatorname{Log} \circ R_{n^{-11/32}}. \]

As explained in [Caracciolo et al. 2005], the detection of the exponent $-3/2$ for the first correction term is somewhat critical due to the possible appearance of an additional term $b\, n^{-59/32} = b\, n^{-1.84375}$. The authors argue that it is difficult to confirm or refute the presence of such a term, based on numerical evidence up to order $N = 40$ only. We agree with this analysis. Nevertheless, it is striking that the exponent returned by our algorithm is not far from the mean value $1.671875$ of $1.5$ and $1.84375$. Also, the algorithm failed to detect a clean $n^{-3/2}$-term, which might indicate the presence of two terms with close exponents. There is still significant room for improving our choice of $\mathcal{A}$ in section 5, and the corresponding criteria for acceptability.
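As a rough numeric sanity check on the data above (our own illustration, not the scheme itself, which uses all $N = 40$ coefficients), dividing out $n^{11/32}$ and applying $Q$ already approximates $\log \mu$:

    import numpy as np

    f = np.array([6, 30, 138, 618, 2730, 11946, 51882, 224130, 964134], float)
    n = np.arange(1, len(f) + 1, dtype=float)
    g = f / n ** (11.0 / 32.0)       # forced scaling R_{n^{-11/32}}
    q = np.log(g[1:] / g[:-1])       # Q(g)_n ~ log(mu) + O(1/n)
    print(np.exp(q[-1]))             # ~ 4.13, vs the known estimate mu ~ 4.1508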
6. Possible improvements

Failures to detect the expected form of asymptotic expansion can be due to the following main reasons:

1. Even for $N \to \infty$, the set $\mathcal{A}$ of transformations does not allow for the detection of the correct asymptotic expansion.

2. At a certain step of the extrapolation process, the wrong atomic transformation is applied.

3. A recipe is found for the asymptotic extrapolation, but the corresponding exp-log extrapolation is incorrect or too hard to compute.

In this section, we will discuss the different types of failure in more detail and examine what can be done about them.

6.1. Inadequate transformations

Of course, the class of asymptotic expansions which can be guessed by our extrapolation scheme is closely related to the choice of $\mathcal{A}$. Asymptotic behaviour for which no adequate atomic transformation is available will therefore not be detected. For instance, none of the transformations discussed so far can be used for the detection of oscillatory sequences such as $f_n = \sin(\alpha n)$. A more subtle sequence which cannot be detected using $\mathcal{A} = \{\Delta, Q, R_{n^{-1}}, R_n, R_{n^2}\}$ is $f_n = (\log n)^{-1}$.

In order to detect $f_n = (\log n)^{-1}$, we might add the transformation $R_{\log n}$. However, the corresponding criterion should be designed with care: as noticed in section 3, sequences of the form $(\log n)^j$ are easily mistaken for constant sequences, when taking $L = \lceil \lambda N \rceil$ for a fixed $\lambda \in (0, 1)$. A more robust criterion might require $\| \Delta f \| \leq \epsilon\, \| f \|$ and $\| \Delta (f^{-1}) \| \leq \epsilon\, \| f^{-1} \|$, as well as $\| \Delta (f \log n) \| \leq \epsilon\, \| f \log n \|$, for a suitable threshold $\epsilon$.

Another more intricate difficulty occurs when the various transformations introduce an infinite number of parasite terms. For instance, consider the expansion

\[ f_n \asymp a + \frac{b}{n} + c\, \mathrm{e}^{-\sqrt n} + \cdots. \qquad (18) \]

When applying one $\Delta$-transformation, we obtain

\[ (\Delta f)_n \asymp -\frac{b}{n^2} + \frac{b}{n^3} - \cdots + \bigl( \mathrm{e}^{\sqrt n - \sqrt{n+1}} - 1 \bigr)\, c\, \mathrm{e}^{-\sqrt n} + \cdots. \qquad (19) \]

The extrapolation of (19) therefore leads to an infinite number of $\Delta$-transformations, in order to clear the infinity of terms which were introduced by the first $\Delta$-transformation. In particular, the presence of the term $c\, \mathrm{e}^{-\sqrt n}$ will not be detected. The phenomenon aggravates when parasite logarithmic terms are introduced. For instance, if we start the extrapolation of

\[ f_n \asymp \sigma^n\, n^{\alpha} \Bigl( c_0 + \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots \Bigr) \]

with the transformation $R_{n^{-1}} \circ \operatorname{Log}$ instead of $Q$, then we are led to the extrapolation of an expansion of the form

\[ (R_{n^{-1}} \circ \operatorname{Log} f)_n \asymp \log \sigma + \frac{\alpha \log n + b_1}{n} + \frac{\alpha' \log n + b_2}{n^2} + \cdots. \]

We have seen in the previous section (examples $f^{(5)}$ and $f^{(6)}$) that such expansions are much harder to detect than ordinary power series expansions. This is why we often prefer the $Q$-transformation over $\operatorname{Log}$.

The real reason behind the above difficulty is that "nice" expansions in exp-log form, such as (18), do not coincide with "nice" expansions constructed using the inverses of operators in $\mathcal{A}$. For instance, there is an expansion of the form

\[ f_n \asymp c - \frac{c_1}{n} - \frac{c_2}{n^2} - \cdots + \mathrm{e}^{-\sqrt n}, \]

whose power series tail absorbs all parasite terms, so that

\[ (\Delta f)_n \asymp \bigl( \mathrm{e}^{\sqrt n - \sqrt{n+1}} - 1 \bigr)\, \mathrm{e}^{-\sqrt n} \]

exactly; such an $f$ is "nice" in the second sense, although its exp-log form is complicated. In order to detect "nice" exp-log expansions, we have to adapt the set $\mathcal{A}$ so as to kill several continuous parameters at a time. In the case of linear combinations (3), this can be done using the transformations of the E-algorithm, modulo prior determination of the functions $\varphi_{1}, \ldots, \varphi_{k}$. In general, the required transformations cease to be simple; see the discussion at the end of section 2. One idea might be to use the relation

\[ \partial = \log (1 + \Delta), \]

in combination with summation up to the least term, in order to replace the finite difference operator by a genuine derivation.
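The relation $\partial = \log(1 + \Delta)$ is easy to test numerically. The following sketch (ours; the truncation order is arbitrary) shows that the truncated series $\Delta - \Delta^2/2 + \Delta^3/3 - \cdots$ indeed acts like $\mathrm{d}/\mathrm{d}n$ on slowly varying sequences:

    import numpy as np

    def Delta(f): return f[1:] - f[:-1]

    def D(f, order=6):
        """Truncation of log(1 + Delta) = Delta - Delta^2/2 + Delta^3/3 - ..."""
        out = np.zeros(len(f) - order)
        term = f
        for k in range(1, order + 1):
            term = Delta(term)
            out += (-1) ** (k - 1) * term[:len(out)] / k
        return out

    n = np.arange(1.0, 101.0)
    print(D(np.log(n))[50], 1.0 / n[50])   # both ~ 0.0196: D(log n) ~ 1/n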
6.2. Inadequate criteria

One particular strength of our asymptotic extrapolation scheme is that it is strongly guided by the asymptotic behaviour of the original sequence and its successive transformations. If the asymptotic expansion of the sequence is given by a transseries, then this property is preserved during the extrapolation process. Theoretically, it is therefore possible to always select the right transformation at each stage. In practice however, the asymptotic regime is not necessarily reached. Furthermore, numerical tests on the dominant asymptotic behaviour of a sequence have to be designed with care. If bad luck or bad design result in the selection of the wrong transformation at a certain stage, then the obtained extrapolation will be incorrect from the asymptotic point of view. Nevertheless, its numeric relative error on the range $\{L, \ldots, N\}$ may still be very small, which can make it hard to detect that something went wrong.

Let us first consider the situation in which the asymptotic regime is not necessarily reached. This is typically the case for expansions such as

\[ f_n \asymp 1 - \frac{1000}{n} + \cdots. \qquad (20) \]

For $N \lesssim 1000$, our scheme will incorrectly assume that $f_n \asymp -1000\, n^{-1}$. The sequence $f^{(7)}$ from section 5 leads to a similar situation: some of its transformed sequences contain subexpressions such as $\log n - 6$ and $7 \log n + 62$, whose competing terms only separate for exponentially large $n$. This explains our problems to detect the right expansion, even for large values of $N$ such as $\mathrm{e}^6 \approx 403.429$ or $\mathrm{e}^9 \approx 8103.08$.

The best remedy is to add an atomic transformation which detects the two leading terms together. For instance, we may use the transformation $\Delta \circ R_{n^2} \circ \Delta$ in the case of (20), and similar combined transformations for the expansions above. Of course, in order to gain something, we also need to adjust the acceptability criteria. For instance, in the case of (20), we may simply require that $\| (\Delta \circ R_{n^2} \circ \Delta) f \| \leq \epsilon\, \| f \|$ for a suitable threshold $\epsilon$, instead of the combined requirements $\| \Delta f \| \leq \epsilon\, \| f \|$ and $\| (\Delta \circ R_{n^2})(\Delta f) \| \leq \epsilon\, \| \Delta f \|$. Of course, a drawback of our remedy is that there are many possibilities for the subdominant term(s) to be killed. Consequently, we will only be able to correct cancellations on or near the range $\{L, \ldots, N\}$ in a few special cases which occur frequently in practice. For the time being, it seems that polynomials in $n$ or $n^{-1}$ deserve special treatment, but more experimentation is needed in order to complete this list.

Another major problem is the determination of a suitable value for $L$ and good thresholds $\epsilon$ for tests $\| \cdot \| \leq \epsilon$ on the relative error. Let us first examine some frequent mistakes, due to inadequate thresholds. As we have seen before, our scheme frequently assumes sequences of the form $(\log n)^j$ to be constant on the range $\{L, \ldots, N\}$. Fortunately, this is a rather harmless mistake, because we usually want to apply $\Delta$ both for $f_n \asymp c$ and $f_n \asymp c\, (\log n)^k$, $k \in \mathbb{N}$. A more annoying mistake is to consider $n^{\alpha}$ as a constant for small $\alpha > 0$. This typically happens when $\epsilon$ is too large. Conversely, if $\epsilon$ is too small, then we may fail to recognize that $f_n = a + b\, (\log n)^{-1} \asymp 1$.

An interesting experiment for quantifying the risk of mistakes is to examine how well basic functions, such as $(\log n)^{\pm 1}$, $n^{\alpha}$ or $\mathrm{e}^{\alpha \sqrt n}$, are approximated by other basic functions on the range $\{L, \ldots, N\}$. More precisely, given sequences $f$ and $g_1, \ldots, g_k$, we solve the system

\[ \begin{pmatrix} g_1(n_1) & \cdots & g_k(n_1) \\ \vdots & & \vdots \\ g_1(n_k) & \cdots & g_k(n_k) \end{pmatrix} \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_k \end{pmatrix} = \begin{pmatrix} f(n_1) \\ \vdots \\ f(n_k) \end{pmatrix} \]

for $L \leq n_1 < \cdots < n_k = N$ and consider the relative error

\[ \eta_{g_1, \ldots, g_k} = \frac{\| f - (\lambda_1 g_1 + \cdots + \lambda_k g_k) \|}{\| f \|} \]

of the obtained approximation of $f$ by $\lambda_1 g_1 + \cdots + \lambda_k g_k$. Some results are shown in Table 2, where we took the $n_i$ equally spaced. The table shows a significant risk of confusion for $L = \lceil N/2 \rceil$ and the quasi-absence of confusions for small $L$. Unfortunately, when taking $L$ too small, we often fail to be in the asymptotic regime, and we will either need to increase the thresholds $\epsilon$ or improve the detection of several terms in the expansion at once.
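The confusion experiment is easy to reproduce; the following Python sketch (ours, with an arbitrary pair of test functions) fits $f$ at $k$ equally spaced points and measures the residual error on $\{L, \ldots, N\}$:

    import numpy as np

    def confusion(f, gs, L, N):
        pts = np.linspace(L, N, len(gs))                 # equally spaced n_i
        G = np.array([[g(x) for g in gs] for x in pts])
        lam = np.linalg.solve(G, np.array([f(x) for x in pts]))
        ns = np.arange(L, N + 1, dtype=float)
        err = f(ns) - sum(l * g(ns) for l, g in zip(lam, gs))
        return np.max(np.abs(err)) / np.max(np.abs(f(ns)))

    f = lambda x: x ** -0.5
    gs = [lambda x: x ** -1, lambda x: x ** -2, lambda x: x ** -3]
    print(confusion(f, gs, 256.0, 512.0))   # tiny: high risk of confusion
    print(confusion(f, gs, 16.0, 512.0))    # much larger: the functions separate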
We may also check the criteria several times, for increasing values of $L$: transformations which are accepted for smaller values of $L$ can be applied with a greater degree of confidence.

[Table 2. Approximation error $\eta_{g_1, \ldots, g_k}$ of a given sequence $f$ (e.g. $f_n = n^{-1/2}$ or $f_n = (\log n)^{-1}$) by a linear combination $\lambda_1 g_1 + \cdots + \lambda_k g_k$ of other given sequences $g_1, \ldots, g_k$ (e.g. the powers $n^{-1}, \ldots, n^{-4}$) on the range $\{L, \ldots, N\}$, as a function of $L$, for $L$ up to $512$.]

6.3. Multiple acceptable transformations

A final point about the design of a good set $\mathcal{A}$ of atomic transformations is whether we allow several transformations to be acceptable for the same sequence (as in our general scheme), or rather insist on immediately finding the right transformation at each step. For instance, in section 6.1, we argued that $Q$-transformations are usually preferable over $\operatorname{Log}$-transformations. However, in section 5.2, starting with $Q \circ R_{n^{-11/32}}$ instead of $\operatorname{Log} \circ R_{n^{-11/32}}$ leads to an extrapolation of inferior quality. More generally, it is not rare that a better extrapolation is found using a slightly different transformation. This might be due to subtle cancellations in the asymptotic expansion which occur only for one of the transformations. However, a systematic search for magic transformations is quite time-consuming, so we strongly recommend to restrict such searches to the first few iterations.

Moreover, one should avoid using the increased flexibility for loosening the acceptability criteria: a better relative error for a numeric extrapolation is no guarantee for a better asymptotic quality. For instance, in the case of an expansion

\[ f_n \asymp a_0\, (\log n)^{\alpha} + \cdots + \frac{a_k\, (\log n)^{\alpha}}{n^k}, \]

we can only hope to gain a few decimal digits modulo an increase of the extrapolation order by one. A simple interpolation

\[ f_n \approx \sum_{i \geq 0} \frac{\lambda_i}{n^i} \]

on the range $\{L, \ldots, N\}$ will often yield better results at the same order. Nevertheless, we may expect $\| (R_{n^2} \circ \Delta) f \|$ to be exceptionally small whenever $f$ really admits an expansion of the above type, which should never be the case for composite transformations of the form $(R_{n^2} \circ \Delta) \circ \cdots \circ (R_{n^2} \circ \Delta)$. Be guided by the asymptotics at appropriate moments!

6.4. Incorrect exp-log extrapolations

Whenever the asymptotic extrapolation scheme returns an incorrect answer, the computed exp-log extrapolations quickly tend to become absurd. Typically, we retrieve expansions of the type $c\, n^{1 + \delta}$ or $c\, n^{\delta}$ instead of $c\, n$ or $c\, \log n$, where $\delta \neq 0$ is a small non-zero numeric parameter, which really should vanish from a theoretic point of view. In section 5, we have seen an example $f^{(7)}$ of this phenomenon. Actually, the computation of exp-log extrapolations frequently does not terminate at all. We probably should abort computations when the transseries extrapolations involve too many asymptotic scales.

The above phenomenon is interesting in itself, because it might enable us to improve the detection of incorrect expansions: when incorporating the computation of the exp-log extrapolations into the main scheme, we may check that the asymptotic assumptions made on the sequence are still satisfied by the resulting extrapolation. In this way, we might get rid of all extrapolations which are good from a numeric point of view on the range $\{L, \ldots, N\}$, but incorrect for large $n > N$.
However, we have not yet investigated in detail whether our algorithms for the computation with transseries may themselves be a source of errors, especially in the case when we use summation up to the least term.

7. Possible generalizations

Our extrapolation scheme may be generalized along several lines. First of all, the restriction to real-valued sequences is not really essential. Most of the ideas generalize in a straightforward way to complex sequences. Only the transformation $g = \operatorname{Log} f$ requires a bit more care. In order to compute $g$, we first compute $g_N = \log f_N$ using the principal determination of the logarithm. We next compute the other terms using

\[ g_{n+1} = g_n + \log \frac{f_{n+1}}{f_n}. \]

For more information about complex transseries, we refer to [van der Hoeven 2001]. Notice that the present scheme will only work well for complex transseries expansions which do not essentially involve oscillation. An expansion such as

\[ f_n = n^{\mathrm{i}} + 3\, n^{\sqrt 2\, \mathrm{i}} + n^{-7} + \cdots \]

will typically fall in our setting, but our scheme will fail to recognize a sequence as simple as $f_n = \sin n$. One approach for removing this drawback is discussed in [van der Hoeven 2001].

Another direction for generalization is to consider multivariate power series. For instance, assume that the coefficients $f_{n_1, n_2}$ of a bivariate power series are known up to $n_1 + n_2 \leq N$. Then one may pick $N_1$ and $N_2$ with $N_1 + N_2 = N$ and apply the scheme to the sequences $f_{N_1 + k, n_2}$ in $n_2$ for different small values of $k \in \mathbb{Z}$. One may next try to interpolate the resulting expansions as a function of $n_1$. Alternatively, one might investigate a generalization of the scheme where discrete differentiation is applied in an alternate fashion with respect to $n_1$ and $n_2$. Of course, in the multivariate case, it is expected that there are several asymptotic regions, according to the choice of $(N_1, N_2)$.

In fact, the dependency of the asymptotic expansion on the choice of $(N_1, N_2)$ does not only occur in the multivariate case: for the extrapolation recipes presented so far, it would be of interest to study the dependency of the obtained expansion as a function of $N$. For instance, consider the sequence

\[ f_n = \bigl( 1 - n^{-1} \bigr)^{-1} - 100\, n^{-\log n}, \]

which admits a transfinite asymptotic expansion

\[ f_n \asymp 1 + \frac{1}{n} + \frac{1}{n^2} + \cdots - 100\, n^{-\log n}. \]

Depending on the value of $n$, the invisible term $-100\, n^{-\log n}$ may suddenly become dominant from a numerical point of view. This phenomenon is best illustrated by reordering the terms in the expansion as a function of their magnitude, for different values of $n$:

\[ n \approx 10: \quad f_n \asymp 1 - 100\, n^{-\log n} + \frac{1}{n} + \frac{1}{n^2} + \cdots \]

\[ n \approx 100: \quad f_n \asymp 1 + \frac{1}{n} + \frac{1}{n^2} + \frac{1}{n^3} - 100\, n^{-\log n} + \frac{1}{n^4} + \cdots \]

\[ n \approx 1000: \quad f_n \asymp 1 + \frac{1}{n} + \cdots + \frac{1}{n^6} - 100\, n^{-\log n} + \frac{1}{n^7} + \cdots \]

It may therefore be interesting to apply the asymptotic extrapolation scheme for different values of $N$ and try to reconstruct the complete transfinite expansion by combining the different results. Here we notice that the detection of a term like $-100\, n^{-\log n}$ in the first expansion may help for the detection of the deeper power terms in the second expansion, since we may subtract it from the expansion (and vice versa).

Acknowledgment. We would like to thank … for various stimulating conversations and for their comments on a draft of this paper. We also thank the anonymous referees for their remarks and references.

Bibliography

G. Alefeld and J. Herzberger. Introduction to Interval Computations. Academic Press, New York, 1983.

B. Beckermann and G. Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM J. Matrix Anal. Appl., pages 804–823, 1994.

C. Brezinski and M. Redivo Zaglia. Extrapolation Methods: Theory and Practice. North-Holland, Amsterdam, 1991.
M. Canalis-Durand, F. Michel, and M. Teisseyre. Algorithms for formal reduction of vector field singularities. Journal of Dynamical and Control Systems, 7(1):101–125, January 2001.

S. Caracciolo, A. J. Guttmann, I. Jensen, A. Pelissetto, A. N. Rogers, and A. D. Sokal. Correction-to-scaling exponents for two-dimensional self-avoiding walks. J. Stat. Phys., 120:1037–1100, 2005.

H. Derksen. An algorithm to compute generalized Padé–Hermite forms. Technical Report 9403, Catholic University Nijmegen, January 1994.

J. Écalle. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques, 1992.

P. Flajolet and R. Sedgewick. An Introduction to the Analysis of Algorithms. Addison-Wesley, Reading, Massachusetts, 1996.

M. Grimmer, K. Petras, and N. Revol. Multiple precision interval packages: comparing different approaches. Technical Report RR 2003-32, LIP, École Normale Supérieure de Lyon, 2003.

D. Gruntz. On Computing Limits in a Symbolic Manipulation System. PhD thesis, ETH Zürich, Switzerland, 1996.

G. H. Hardy. Orders of Infinity. Cambridge Univ. Press, 1910.

G. H. Hardy. Properties of logarithmico-exponential functions. Proc. London Math. Soc., 10(2):54–90, 1911.

I. Jensen and A. J. Guttmann. Self-avoiding polygons on the square lattice. J. Phys. A: Math. Gen., 32:4867–4876, 1999.

A. K. Lenstra, H. W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261:515–534, 1982.

R. E. Moore. Interval Analysis. Prentice Hall, Englewood Cliffs, N.J., 1966.

W. Pauls. PhD thesis, University of Nice–Sophia Antipolis, 2007.

W. Pauls and U. Frisch. A Borel transform method for locating singularities of Taylor and Fourier series. J. Stat. Phys., 127:1095–1119, 2007.

W. Pauls, T. Matsumoto, U. Frisch, and J. Bec. Nature of complex singularities for the 2D Euler equation. Technical report, 2006.

H. Poincaré. Les méthodes nouvelles de la mécanique céleste, Tome II. Gauthier-Villars, Paris, 1893.

G. Pólya. Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen. Acta Mathematica, 68:145–254, 1937.

D. Richardson, B. Salvy, J. Shackell, and J. van der Hoeven. Expansions of exp-log functions. In Y. N. Lakshman, editor, Proc. ISSAC '96, pages 309–313, Zürich, Switzerland, July 1996.

B. Salvy. Asymptotique automatique et fonctions génératrices. PhD thesis, École Polytechnique, France, 1991.

J. Shackell. A differential-equations approach to functional equivalence. In Proc. ISSAC '89, pages 7–10, Portland, Oregon, 1989. ACM Press, New York.

J. Shackell. Growth estimates for exp-log functions. J. Symbolic Computation, 10:611–632, 1990.

J. Shackell. Limits of Liouvillian functions. Proc. London Math. Soc., 72:124–156, 1996. Appeared in 1991 as a technical report at the Univ. of Kent, Canterbury.

B. Salvy and P. Zimmermann. Gfun: a Maple package for the manipulation of generating and holonomic functions in one variable. ACM Trans. Math. Software, 20(2):163–177, 1994.

J. van der Hoeven. Asymptotique automatique. PhD thesis, École polytechnique, Palaiseau, France, 1997.

J. van der Hoeven. Complex transseries solutions to algebraic differential equations. Technical Report 2001-34, Univ. d'Orsay, 2001.

J. van der Hoeven. Relax, but don't be too lazy. J. Symbolic Computation, 34:479–542, 2002.

J. van der Hoeven et al. Mathemagix, 2002. http://www.mathemagix.org.

J. van der Hoeven. Algorithms for asymptotic interpolation. Technical Report 2006-12, Univ. Paris-Sud, 2006.

J. van der Hoeven. Transserial Hardy fields. Technical Report 2006-37, Univ. Paris-Sud, 2006. Accepted for publication.

J. van der Hoeven. Transseries and Real Differential Algebra, volume 1888 of Lecture Notes in Mathematics. Springer-Verlag, 2006.

J. van der Hoeven. Meta-expansion of transseries. Technical Report 2008-03, Université Paris-Sud, Orsay, France, 2008. Short version presented at ROGICS 2008, Mahdia, Tunisia.

E. J. Weniger. Nonlinear sequence transformations: computational tools for the acceleration of convergence and the summation of divergent series. Technical Report math.CA/0107080, Arxiv, 2001.

H. S. Wilf. generatingfunctionology. Academic Press, 2nd edition, 1994.