This work has partially been supported by the ANR Gecko project.

The asymptotic behaviour of many univariate functions can only be expressed in generalized asymptotic scales, which are not merely formed of powers of a single variable. The computation of asymptotic expansions of functions in such generalized scales may lead to infinite cancellations, which complicate the design and implementation of practical algorithms. In this paper, we introduce a new heuristic technique of “meta-expansions”, which is both simple and efficient in practice, even though the answers are not guaranteed to be correct in general.

The asymptotic behaviour of many univariate functions can only be expressed in generalized asymptotic scales, which are not merely formed of powers of a single variable.
It was already noticed by Hardy [Har10, Har11] that many interesting functions arising in combinatorics, number theory or physics can be expanded w.r.t. scales formed by so-called exp-log functions or L-functions. An exp-log function is constructed from an indeterminate and the real numbers using the field operations, exponentiation and logarithm. An L-function is defined similarly, by adding algebraic functions to our set of building blocks.
However, the class of functions which can be expanded with respect to a scale formed by exp-log functions (or L-functions) is not stable under several simple operations such as integration or functional inversion [Sha93, vdH97, MMvdD97]. More recently, exp-log functions have been generalized so as to allow for expressions with infinite sums, giving rise to the notion of transseries [DG86, É92]. In section 2, we briefly recall some of the most important definitions and properties. For more details, we refer to [vdH06c].
Given an explicit expression (such as an exp-log function), or the solution to an implicit equation, an interesting question is how to find its asymptotic expansion automatically. When working with respect to a generalized asymptotic scale, such as at infinity (), even simple expressions can lead to infinite cancellations:
In many cases, the detection of infinite cancellations can be reduced to the zero-test problem in a suitable class of functions [Sha90, GG92, RSSvdH96, Sal91, vdH97].
However, zero-test problems are often very hard. In the case of exp-log functions, a complete zero-test is only known if Schanuel's conjecture holds [Ric97, vdH98, vdHS06, Ric07]. If we want to expand more general expressions or more general solutions to differential or functional equations, the corresponding zero-test problem tends to get even harder. Consequently, zero-testing tends to complicate the design of mathematically correct algorithms for more general asymptotic expansions. From the practical point of view, the implementation of robust zero-tests also requires a lot of work. Moreover, mathematically correct zero-tests tend to monopolize the execution time.
In this paper, we will investigate an alternative, more heuristic approach for the computation of asymptotic expansions. We will adopt a similar point of view as a numerical analyst who conceives a real number as the limit of a sequence of better and better approximations. In our case, the asymptotic expansion will be the limit of more and more precise polynomial approximations, where the monomials are taken in the asymptotic scale. As is often the case in numerical analysis, our approach will only be justified by informal arguments and the fact that it seems to work very well in practice.
Besides the analogy with numerical analysis, there are a few additional interesting points which deserve to be mentioned. First of all, a finite sequence of better and better approximations gives rise to a second sequence , which can itself be encoded by a generating series
The computation of the expansion of can thus be reinterpreted as the computation of the expansion of , which we therefore regard as the “meta-expansion” of . This technique will be detailed in section 3. Additional complications arise in the context of transseries, because the elements of the asymptotic scale are themselves exponentials of other transseries. The computation of meta-expansions for transseries will be detailed in sections 4 and 5.
A second interesting aspect of meta-expansions is that we may operate on the meta-expansion without changing the underlying expansion. In a complex computation involving lots of auxiliary series, this provides some meta-control to the user. For instance, some subexpressions can be computed with more accuracy (or less accuracy) and one can focus on a specific range of terms. Techniques for the acceleration of convergence play a similar role in numerical analysis [PTVF07, Section 5.3]. Another operation, called “stabilization”, removes those terms in the expansions which change at every few steps. After stabilization, we tend to compute only terms which occur in the final expansion of , even though they usually appear in a different order. In particular, stabilization gives rise to a heuristic zero-test. Meta-operations on meta-expansions will be discussed in section 6.
One motivation behind the present paper was its application to the asymptotic extrapolation of sequences by transseries [vdH06a]. This application requires the computation of discrete sums and products of transseries. In section 7, we have included a small demonstration of our current implementation in the Mathemagix system [vdH02b].
For the purpose of this application, we have mainly considered univariate transseries expansions so far. Of course, the approach of our paper generalizes to expansions in several variables. A natural next step for future developments would be to implement the Newton polygon method for rather general functional equations. Another interesting question is how to reincorporate theoretically correct zero-tests in our mechanism and how much we really sacrifice when using our heuristic substitute. A few ideas in these directions will be given in section 8, some of which actually go back to [vdH94].
In this section, we briefly survey some basic properties of transseries. For more details, we refer to [vdH06c, vdH97, É92, MMvdD97, MMvdD99].
Let be a ring and a commutative monomial monoid which is partially ordered by an asymptotic dominance relation . A subset is said to be well-based if it is well-quasi-ordered [Pou85, Mil85] for the opposite ordering of and grid-based if it satisfies a bound of the form
(1) 
A well-based power series is a formal sum , whose support is well-based. It is classical [Hah07, Hig52] that the set of well-based power series forms a ring. The subset of grid-based power series (i.e. with grid-based support) forms a subring of .
Example
with for (i.e. ). Then the first series is grid-based and the second one only well-based.
A family of series in is said to be well-based if is well-based and is finite for every . In that case, the sum with is again in . A linear mapping is said to be strong if it preserves well-based summation. Grid-based families and the corresponding notion of strong linearity are defined similarly.
In the case when is a field and is totally ordered, then and are also fields. Furthermore, any nonzero admits a unique dominant monomial with corresponding dominant coefficient and relative remainder such that . We call the dominant term of and define in the case when . The series also admits a canonical decomposition
Here just means that ; more generally, . If is gridbased, then so are , and . If is actually an ordered field, then so are and , by taking for all .
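To make these notions concrete, here is a minimal Python sketch of dominant terms and the canonical decomposition, under the simplifying assumption that series are finitely supported over the toy scale of powers x^{-e} (a dict mapping the exponent e to its coefficient; smaller e dominates at infinity). The function names are ours, not part of any existing implementation:

```python
def dominant_term(f):
    """Return (monomial, coefficient) of the asymptotically largest term.

    Series are dicts {e: c} standing for a sum of terms c * x^(-e);
    smaller exponents e dominate at infinity.  The zero series has no
    dominant term."""
    support = [e for e, c in f.items() if c != 0]
    if not support:
        raise ValueError("zero series has no dominant term")
    e = min(support)
    return e, f[e]

def canonical_decomposition(f):
    """Split f into (purely large, constant, infinitesimal) parts,
    mirroring the canonical decomposition recalled in the text."""
    large = {e: c for e, c in f.items() if e < 0 and c != 0}
    const = {e: c for e, c in f.items() if e == 0 and c != 0}
    small = {e: c for e, c in f.items() if e > 0 and c != 0}
    return large, const, small
```

Over this toy scale, the dominant monomial is simply the smallest exponent; for general transmonomials, the comparison itself becomes a recursive problem, as discussed in section 4.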
The field of grid-based transseries is a field of the form with additional operators and . The set of transmonomials coincides with the set of exponentials of transseries with . More generally, we have
for any (i.e. ) and
The construction of is detailed in [vdH06c, Chapter 4]. The construction of fields of well-based transseries is a bit more delicate [DG86, vdH97, Sch01], because one cannot simultaneously ensure stability under exponentiation and infinite summation. However, there is a smallest such field , if we exclude transseries with arbitrarily nested logarithms or exponentials, such as .
Let be one of the fields or . Then admits a lot of additional structure:
There exists a unique strong derivation with , and for all .
There exists a unique strong integration with and for all .
For any positive, infinitely large transseries , there exists a unique strongly linear right composition , with (), and .
Each admits a unique functional inverse .
is real closed. Even better: given a differential polynomial and in with , there exists an with and .
Furthermore, there exist very general implicit function theorems [vdH01, vdH06c], which can be used in order to solve functional equations, when put in a suitable normal form.
The field is highly non-Archimedean. For what follows, it will be convenient to introduce the asymptotic flatness relation
For , one has if and only if for all .
The intuitive idea behind meta-expansion is that, from the computational point of view, series usually arise as a sequence of successive approximations of an interesting object. We will make this simple idea operational by taking the approximations to be polynomials. In order to avoid repetitions, we will systematically work in the well-based setting; it is easy to adapt the definitions to the grid-based case.
Let be an effective ring and an effective monomial monoid. Recall that stands for the ring of well-based generalized power series and let be the corresponding set of polynomials (i.e. series with finite support). We define an expander to be a computable well-based sequence of polynomials. Its sum will be called the result of the expander and we say that is an expander for . We also define an approximator to be a computable sequence , such that is well-based and such that for each , there exists an for which is constant for . In that case, the limit is called the result of the approximator.
Clearly, the notions of expander and approximator are variants of one another: if is an expander, then with defines an approximator with the same result. Similarly, if is an approximator, then with , () defines an expander with the same result. However, for certain purposes, expanders are more suitable, because an expander can be manipulated via its generating series
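The conversion between the two representations can be sketched in Python, modelling expanders and approximators as functions from indices to finitely supported dicts over a toy scale of powers x^{-e} (a simplifying assumption; the names are ours):

```python
def expander_to_approximator(P):
    """Turn an expander n -> P(n) (polynomial term packets, as dicts)
    into an approximator n -> P(0) + ... + P(n)."""
    def f(n):
        acc = {}
        for k in range(n + 1):
            for m, c in P(k).items():
                acc[m] = acc.get(m, 0) + c
        return {m: c for m, c in acc.items() if c != 0}
    return f

def approximator_to_expander(f):
    """Turn an approximator back into an expander via successive
    differences: P(0) = f(0), P(n) = f(n) - f(n-1)."""
    def P(n):
        if n == 0:
            return dict(f(0))
        prev, cur = f(n - 1), f(n)
        diff = {m: cur.get(m, 0) - prev.get(m, 0)
                for m in set(cur) | set(prev)}
        return {m: c for m, c in diff.items() if c != 0}
    return P
```

For instance, the expander n ↦ x^{-n} converts to the approximator of partial sums of the geometric series, and back.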
For other purposes though, approximators are the more natural choice. As far as notations are concerned, it is convenient to pass from expanders to approximators (and vice versa) by prefixing the index by a semicolon (resp. removing the semicolon).
We will denote by the set of approximable series in which admit an expander (or approximator). The corresponding set of expanders will be denoted by . Given , we use the notation to indicate that represents . Given , we will also use the notation to indicate that is a representation for . For more details on this convention, see [vdH07, Section 2.1].
In practice, expanders and approximators are usually implemented by pointers to an abstract class with a method to compute its coefficients. For more details on how to do this, we refer to [vdH02a]. Let us now show how to implement expanders and approximators for basic operations in .
It will be convenient to simply regard as a subset of .
Subtraction is treated in a similar way.
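Over the same toy representation, addition and subtraction of expanders act packetwise on the terms; the following sketch (names ours) illustrates this:

```python
def add_expanders(P, Q):
    """Sum of two expanders: add the n-th term packets coefficientwise."""
    def R(n):
        out = dict(P(n))
        for m, c in Q(n).items():
            out[m] = out.get(m, 0) + c
        return {m: c for m, c in out.items() if c != 0}
    return R

def sub_expanders(P, Q):
    """Difference of two expanders, treated analogously."""
    def R(n):
        out = dict(P(n))
        for m, c in Q(n).items():
            out[m] = out.get(m, 0) - c
        return {m: c for m, c in out.items() if c != 0}
    return R
```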
However, a subtlety occurs here. Consider for instance the case when and , so that and . Contrary to what happened in the case of addition, the definitions (6) and (7) do not coincide in the sense that . Indeed, we respectively find
As a general rule, the manipulation of expanders tends to be a bit more economical from the computational point of view, its coefficients being smaller in size.
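Since the definitions (6) and (7) are not reproduced above, the following sketch illustrates one plausible pair of products, stated as an assumption: the pointwise product of approximators versus a Cauchy-style convolution of expanders. Both have the same result at the limit, but their intermediate approximations differ, in line with the subtlety discussed above:

```python
def poly_mul(a, b):
    """Multiply two polynomials over the toy scale x^(-e) (dicts e -> coeff)."""
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def approximator_product(f, g):
    """Pointwise product of approximators: h(n) = f(n) * g(n)."""
    return lambda n: poly_mul(f(n), g(n))

def expander_product(P, Q):
    """Cauchy-style product of expanders: R(n) = sum of P(i)*Q(j) over i+j = n."""
    def R(n):
        out = {}
        for i in range(n + 1):
            for e, c in poly_mul(P(i), Q(n - i)).items():
                out[e] = out.get(e, 0) + c
        return {e: c for e, c in out.items() if c != 0}
    return R
```

The expander product computes smaller packets at each step, which is one way to see why expanders tend to be more economical.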
Notice that the individual coefficients of need not be infinitesimal. Besides, the composition is usually not a polynomial. Therefore, we have forced to become infinitesimal using a multiplication by . This is justified by the fact that
Multiplication of by corresponds to delaying the approximation process of .
(9) 
Unfortunately, there exists no general algorithm for the computation of the dominant term . We will therefore assume the existence of an oracle for this purpose. In section 6, we will present a heuristic algorithm which can be used in practice.
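Assuming such an oracle, inversion can be sketched over the toy scale x^{-e} as follows: we factor out the dominant term, so that the remaining factor is one plus an infinitesimal series, and sum a truncated geometric series. The function name and the truncation parameter are ours:

```python
def invert_expander(f_poly, order):
    """Invert a nonzero polynomial over the toy scale x^(-e), up to the
    given number of geometric-series terms.  The dominant term plays the
    role of the oracle from the text: writing f = c*x^(-d)*(1 + eps)
    with eps infinitesimal, 1/f = (1/c)*x^d * sum_k (-eps)^k."""
    d = min(e for e, c in f_poly.items() if c != 0)
    c = f_poly[d]
    # eps = f / (c * x^(-d)) - 1 is supported on strictly positive exponents
    eps = {e - d: cc / c for e, cc in f_poly.items() if e != d}
    out, power = {}, {0: 1}          # power holds (-eps)^k, starting at k = 0
    for _ in range(order):
        for e, cc in power.items():
            out[e] = out.get(e, 0) + cc
        nxt = {}
        for e1, c1 in power.items():
            for e2, c2 in eps.items():
                nxt[e1 + e2] = nxt.get(e1 + e2, 0) - c1 * c2
        power = nxt
    # shift back by x^d and divide by the dominant coefficient
    return {e - d: cc / c for e, cc in out.items() if cc != 0}
```

For instance, inverting 1 + 1/x to three terms yields 1 − 1/x + 1/x², as expected.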
and apply a fixed-point theorem. In our case, this requires a well-based operator for which we can prove that admits a well-based limit in . The obvious expander for is given by
For some general fixed-point theorems for sequences of this kind, we refer to [vdH01] and [vdH06c, Chapter 6].
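A toy illustration in Python: iterating the contracting operator f ↦ 1 + x^{-1} f over the scale x^{-e} yields an approximator for the geometric series, in the spirit of the fixed-point computations above (the names and the choice of operator are ours):

```python
def fixed_point_approximator(phi, start, steps):
    """Iterate an operator phi on series; under the well-basedness
    assumptions of the text, the iterates stabilize coefficient by
    coefficient, and the sequence of iterates is an approximator."""
    f = dict(start)
    seq = [dict(f)]
    for _ in range(steps):
        f = phi(f)
        seq.append(dict(f))
    return seq

def phi(f):
    # toy contracting operator f |-> 1 + x^(-1) * f over the scale x^(-e)
    out = {0: 1}
    for e, c in f.items():
        out[e + 1] = out.get(e + 1, 0) + c
    return out
```

Each iteration fixes one more coefficient of the limit 1 + 1/x + 1/x² + ⋯, so the low-order terms are never revised once computed.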
In order to compute with well-based transseries in , we take our coefficients in an effective subfield of and our monomials in an effective subgroup of . Moreover, the monomials in which are not iterated logarithms are themselves exponentials of approximable transseries.
More precisely, elements in are represented by monomials which are of one of the following forms:
either for some ;
or , with .
In the first case, the exponential height of is defined to be zero and in the second case, we set .
Elements are multiplied using and inverted using . Here : if , then ; if , then . The asymptotic ordering on is implemented using
and therefore relies on an oracle for the computation of . Setting , we notice that heuristic algorithms for the computation of this dominant term recursively need to compare elements in with respect to . The termination of the recursion is based on the fact that .
The main additional operations for the manipulation of transseries are exponentiation and logarithm. Since exponentiation relies on canonical decompositions, we start with the general operation of restriction of support.
If the subset is computable, i.e. admits a computable membership test, then the mapping is computable. In particular, given , we may compute using
Now making continued use of our oracle for the computation of dominant terms, the sets , and are computable. Consequently, we have algorithms to compute , and .
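Restriction of support with a computable membership test is straightforward to sketch; over the toy scale x^{-e}, the infinitesimal monomials are those with e > 0 (names ours):

```python
def restrict(f, member):
    """Restriction of support: keep only the terms of f (a dict e -> coeff)
    whose monomial passes the computable membership test `member`."""
    return {m: c for m, c in f.items() if c != 0 and member(m)}

def infinitesimal_part(f):
    # over the toy scale x^(-e), the infinitesimal monomials have e > 0
    return restrict(f, lambda e: e > 0)
```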
Example
When computing an expander with the routines presented so far, we obtain
Remark
for some .
.
.
In the grid-based setting, TB2 may be replaced by the stronger requirement that can be expanded w.r.t. for all .
Let us now come to the more interesting operations on transseries. Differentiation and composition rely on the general principle of extension by strong linearity.
It can be shown [vdH06c, vdH97] that the derivation w.r.t. admits a unique strongly linear right inverse with the “distinguished property” that for all . One way to construct is to first compute its “trace” which is the unique strongly linear operator with on . We then compute by solving the implicit equation
(10) 
One may either apply (10) for monomials and extend by strong linearity, or apply it directly for arbitrary transseries .
We next extend by strong linearity.
and define an expander for this operator:
Then distinguished integration can be regarded as the application of this operator expander to another expander :
We also notice that is a fixedpoint of the operator
Adapting our general mechanism for the computation of fixed points for operators instead of series, we find as the natural expander of .
Functional inversion of transseries can be done using formulas in [vdH06c, Section 5.4] and we will not detail this operation here. Two other interesting operations are finite differences and discrete summation:
We implemented these operations because they are critically needed in an algorithm for the asymptotic extrapolation of sequences [vdH06a]. Modulo compositions with and , they are related to finite quotients and discrete products:
Our algorithm for right-composition clearly yields a way to compute for . The distinguished summation is the unique distinguished strongly linear right-inverse of , i.e. and for all . It therefore suffices to show how to compute for monomials . Three different cases need to be distinguished:
which yields a solution
The application of the operator
to is computed in a similar way as in the case of distinguished integration. In fact, the expander can directly be applied to expanders with for all . Moreover, this application preserves grid-basedness.
(11) 
We rewrite this equation in operator form
and we invert the operator as in the flat case. No integration is needed this time, since . Again, the grid-based property is preserved by moderate discrete summation.
If , then this is done by computing a fixed point for the operator
If , then we compute a fixed point for the operator
It can be shown that is grid-based if is grid-based and there exists a such that for all .
So far, we have not really exploited the extra level of abstraction provided by expanders. In this section, we will describe several “meta-operations” on expanders. These operations do not affect the series being represented, but rather concern qualitative aspects of the approximation process: they guide the rate of convergence, the terms which appear first, etc. Based on the process of “stabilization”, we will also describe a heuristic zero-test and a heuristic method for the computation of dominant terms.
In the case when is a constant function, we have
The shortening operator is typically used for the expansion of expressions which involve an expander , such that the expression size of tends to grow very rapidly with . For instance, we may prefer to compute a sum using the expander instead of .
In the case when is a constant function, we have
During the expansion of an expression, the lengthening operator may for instance be used in order to boost the precision of a subexpression. We may also use it as a substitute for the order parameter of a typical expansion command. E.g., we would simply display in order to show additional terms of .
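One plausible reading of the shortening and lengthening operators, sketched over expanders modelled as functions from indices to term packets (the exact indexing conventions are our assumption, not the paper's definitions):

```python
def lengthen(P, k):
    """Lengthening: run the underlying expander k steps ahead, so that
    each approximation carries k extra packets.  The result at the limit
    is unchanged."""
    def Q(n):
        if n == 0:
            out = {}
            for j in range(k + 1):
                for m, c in P(j).items():
                    out[m] = out.get(m, 0) + c
            return {m: c for m, c in out.items() if c != 0}
        return P(n + k)
    return Q

def shorten(P, k):
    """Shortening: delay the underlying expander by k steps, emitting
    empty packets first; useful when the packets of P grow quickly."""
    return lambda n: ({} if n < k else P(n - k))
```

Both are pure meta-operations: the represented series is the same, only the schedule of the approximation process changes.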
The stabilization operator removes all terms from the expansion which are still subject to changes during the next approximations. Even for small values of , such as , we usually have
(12) 
where
In particular, the successive approximations usually only contain terms which occur in the final result .
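A hedged sketch of a stabilization operator with window k: a term of the n-th approximation survives only if its coefficient is unchanged during the next k approximation steps (the windowing convention is our assumption):

```python
def stabilize(f, k):
    """Stabilization with window k: keep only those terms of the n-th
    approximation whose coefficients do not change during the next k
    approximation steps."""
    def g(n):
        cur = f(n)
        future = [f(n + j) for j in range(1, k + 1)]
        return {m: c for m, c in cur.items()
                if all(h.get(m, 0) == c for h in future)}
    return g
```

A term that is still subject to cancellation is thus hidden until its coefficient has settled down.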
Example
Example
which arises during the computation of
Indeed, the first terms of are given by
In this kind of situation, it may be necessary to consider more powerful stabilizations of the form .
(13) 
for a sufficiently large value of . When taking and fixed, the formula (13) also provides us with a reasonable heuristic for the computation of (which implies a zero-test for ). Of course, the values and should not be taken too small, so as to provide sufficient robustness. On the other hand, large values of and may lead to unacceptable computation times. Our current compromise has worked for all practical examples we have tried so far.
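The resulting heuristic zero-test can be sketched as follows; the two parameters play the roles of the stage and window discussed above, but the concrete convention is our assumption:

```python
def heuristic_is_zero(f, n, k):
    """Heuristic zero-test: declare f zero when, at stage n, no nonzero
    coefficient of f(n) survives k further approximation steps
    unchanged.  As stressed in the text, this can be fooled: n and k
    trade robustness against computation time."""
    cur = f(n)
    future = [f(n + j) for j in range(1, k + 1)]
    stable = {m: c for m, c in cur.items()
              if c != 0 and all(h.get(m, 0) == c for h in future)}
    return not stable
```

An approximator whose terms keep oscillating is thus (correctly or not) declared zero, while a stable nonzero coefficient immediately refutes nullity.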
Remark
Remark
Generalizing example 5, we see that we must take . Would be sufficient?
(14) 
with , , and . In (14), we now replace each nonzero by the expression , and print the result. For instance, if
then we print
An interesting feature of this way of printing is that it allows us to see some of the remaining terms after the first leading terms. In certain cases, such as
one might prefer to suppress some of these extra terms. One criterion for suppression could be the following: given the last term of some and any term of , suppress all terms with for some .
In the case when , then we have
We call a dominant bias operator. We may typically apply it on auxiliary series with a sharp increase of the number of terms of with . More generally, it is possible to define operators which favour terms at the tail, in the middle, or close to a specified monomial. However, these generalizations do not seem to have any practical applications at first sight.
Most of the algorithms described in this paper have been implemented inside the Mathemagix system [vdH02b].
Mmx] 
use "numerix"; use "algebramix"; use "multimix"; use "symbolix"; 
Mmx] 
x == infinity ('x); 
Mmx] 
1 / (x + 1) 
Mmx] 
1 / (x + log x + log log x) 
Mmx] 
1 / (1 + 1/x + 1/exp x) 
Mmx] 
1 / (1 + 1/x + 1/exp x) - 1 / (1 + 1/x) 
Mmx] 
exp (x + exp (exp x)) - exp (x) 
Mmx] 
exp (exp (x) / (x + 1)) 
Mmx] 
derive (exp (exp (x) / (x + 1)), x) 
Mmx] 
integrate (exp (x^2), x) 
Mmx] 
integrate (x^x, x) 
Mmx] 
sum (x^4, x) 
Mmx] 
product (x, x) 
Mmx] 
lengthen (product (x, x), 8) 
Mmx] 
product (log x, x) 
Mmx] 
fixed_point (f :> log x + f @ (log x)) 
Mmx] 
la == derive (fixed_point (f :> log x + f @ (log x)), x) 
Mmx] 
mu == la * la + 2 * derive (la, x) 
Mmx] 
fixed_point (f :> 1/x + f @ (x^2) + f @ (x^x)) 
Even though the expansion algorithms developed so far are usually sufficient for applications, they lack robustness in several ways. First of all, we have used heuristic algorithms for zero-testing and the computation of dominant terms. Some of our algorithms crucially depend on the correctness of these heuristic algorithms. For instance, our algorithm for the computation of an inverse yields an erroneous result if is computed incorrectly. Finally, expanders only asymptotically tend to . Even if we know that a given monomial is in the support of , we do not know how large should be in order to guarantee that . In this section, we describe a few ideas which may be used to increase the robustness of our algorithms.
Then we may define a new expander by taking the diagonal
In order to have , it now suffices to have (), instead of . Of course, in nasty cases, it might still happen that for all . In other words, the strategy of auto-correction produces no miracles and does not substitute for a genuine zero-test. Even in the case when the stabilization operator is sufficiently powerful to guarantee for a certain class of expanders, one should keep in mind that the result only becomes correct at the limit; we still don't know how many terms need to be computed.
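The diagonal construction itself is a one-liner; modelling the k-th corrected approximator as F(k), the diagonal queries the n-th correction at stage n (a sketch, with names ours):

```python
def diagonal(F):
    """Auto-correction by diagonalization: F(k) returns the k-th
    corrected approximator; the diagonal approximator queries the n-th
    correction at stage n, so any oracle decision that is eventually
    revised is eventually taken into account."""
    return lambda n: F(n)(n)
```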
From a practical point of view, the strategy of auto-correction is easy to implement on the series level from section 3: in our example of inversion, the expander may simply keep in memory for the largest considered so far and only update its value when changes. Implementations become more involved when considering recursive transseries expansions, as in section 4. Indeed, in this more general setting, we also need to correct erroneous outcomes for the asymptotic ordering , which recursively relies on the orderings and for expanders of lower exponential height. In order to generalize the idea, one thus has to define sequences of approximations and for and , and systematically work with the relations and when making decisions for expansions at stage .
An interesting question is whether we can use a zero-test in in order to design a non-heuristic algorithm for the computation of dominant terms. In order to make this work, we have to be able to detect infinite cancellations of terms which occur in expressions such as . A general mechanism for doing this is to refine the mechanism of expanders by indexing over a suitable well-based set.
More precisely, given an abstract well-based set (i.e. a set which is well-quasi-ordered for the opposite ordering of ), we define a sequence of finite subsets of by . In general, the sequence is transfinite, but if we have , then we say that is accessible. We say that is computable if, for any finite subset , we can compute . In particular, this implies that the sequence is computable.
A well-based expander is a computable well-based family , indexed by a computable accessible well-based set . A well-based expander is the natural refinement of an expander in the usual sense, obtained by regrouping terms . We say that is a termwise well-based expander if each is of the form and the mapping is increasing. Notice that is automatically well-based if is increasing.
Recall that the initial segment generated by a finite subset is defined by . Now consider a termwise well-based expander such that for any finite subset . If the mapping is effective, then we call an expander over . For the operations we have considered in this paper, it should be possible to replace the usual notion of expander by expanders over (assuming that is stable under the operation). This was already shown in [vdH94] for the basic operations from section 3 and remains to be worked out for the other ones.
Given an expander over , the zero-test in may now be used in order to compute the set of dominant monomials of . The algorithm again goes back to [vdH94]:
Let .
Replace by a minimal subset with .
Let and for each .
If there exists an with , then set
and go to step 2.
Return .
There are really two aspects to a computable well-based series . On the one hand, we should be able to compute its coefficient for any monomial . On the other hand, its support should be sufficiently effective. In the case of ordinary power series , the second issue does not really arise, because the support is necessarily included in the well-based set . In general, one might require that is given by a termwise well-based expander, which yields quite a lot of information about .
As to the computation of coefficients , consider the case of a product , where is totally ordered and and are given by and with and . Given , we hit the problem that we don't have any a priori information on the asymptotic behaviour of the and the . In order to design an algorithm for the computation of , we need more control over this asymptotic behaviour.
In the grid-based setting, we are really computing with multivariate power series, and no real difficulties arise. In the well-based setting, things get more involved. When we restrict our attention to transseries, there are ways to represent and compute with monomial cuts [vdH06c, Chapter 9]. A monomial cut is the analogue of a Dedekind cut for the set of transmonomials instead of . Given a termwise well-based expander , any initial segment naturally induces a transseries and a monomial cut, called the width of . For a fully satisfactory definition of computable well-based series, one should be able to compute these widths. However, we have not investigated this matter in detail yet.
[DG86] B. I. Dahn and P. Göring. Notes on exponentiallogarithmic terms. Fundamenta Mathematicae, 127:45–50, 1986.
[É92] J. Écalle. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques, 1992.
[GG92] G.H. Gonnet and D. Gruntz. Limit computation in computer algebra. Technical Report 187, ETH, Zürich, 1992.
[Hah07] H. Hahn. Über die nichtarchimedischen Größensysteme. Sitz. Akad. Wiss. Wien, 116:601–655, 1907.
[Har10] G.H. Hardy. Orders of infinity. Cambridge Univ. Press, 1910.
[Har11] G.H. Hardy. Properties of logarithmico-exponential functions. Proceedings of the London Mathematical Society, 10(2):54–90, 1911.
[Hig52] G. Higman. Ordering by divisibility in abstract algebras. Proc. London Math. Soc., 2:326–336, 1952.
[Mil85] E. C. Milner. Basic wqo and bqo theory. In Rival, editor, Graphs and orders, pages 487–502. D. Reidel Publ. Comp., 1985.
[MMvdD97] A. Macintyre, D. Marker, and L. van den Dries. Logarithmic-exponential power series. Journal of the London Math. Soc., 56(2):417–434, 1997.
[MMvdD99] A. Macintyre, D. Marker, and L. van den Dries. Logarithmic-exponential series. Annals of Pure and Applied Logic, 1999. To appear.
[Pou85] M. Pouzet. Applications of well quasiordering and better quasiordering. In Rival, editor, Graphs and orders, pages 503–519. D. Reidel Publ. Comp., 1985.
[PTVF07] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical recipes, the art of scientific computing. Cambridge University Press, 3rd edition, 2007.
[Ric97] D. Richardson. How to recognise zero. JSC, 24:627–645, 1997.
[Ric07] D. Richardson. Zero tests for constants in simple scientific computation. MCS, 1(1):21–37, 2007.
[RSSvdH96] D. Richardson, B. Salvy, J. Shackell, and J. van der Hoeven. Expansions of exp-log functions. In Y.N. Lakhsman, editor, Proc. ISSAC '96, pages 309–313, Zürich, Switzerland, July 1996.
[Sal91] B. Salvy. Asymptotique automatique et fonctions génératrices. PhD thesis, École Polytechnique, France, 1991.
[Sch01] M.C. Schmeling. Corps de transséries. PhD thesis, Université Paris-VII, 2001.
[Sha90] J. Shackell. Growth estimates for exp-log functions. Journal of Symbolic Computation, 10:611–632, 1990.
[Sha93] J. Shackell. Inverses of Hardy L-functions. Bull. of the London Math. Soc., 25:150–156, 1993.
[vdH94] J. van der Hoeven. Outils effectifs en asymptotique et applications. Technical Report LIX/RR/94/09, LIX, École polytechnique, France, 1994.
[vdH97] J. van der Hoeven. Automatic asymptotics. PhD thesis, École polytechnique, Palaiseau, France, 1997.
[vdH98] J. van der Hoeven. Generic asymptotic expansions. AAECC, 9(1):25–44, 1998.
[vdH01] J. van der Hoeven. Operators on generalized power series. Journal of the Univ. of Illinois, 45(4):1161–1190, 2001.
[vdH02a] J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
[vdH02b] J. van der Hoeven et al. Mathemagix, 2002. http://www.mathemagix.org.
[vdH06a] J. van der Hoeven. Algorithms for asymptotic interpolation. Technical Report 2006-12, Univ. Paris-Sud, 2006. Submitted to JSC.
[vdH06b] J. van der Hoeven. Counterexamples to witness conjectures. JSC, 41:959–963, 2006.
[vdH06c] J. van der Hoeven. Transseries and real differential algebra, volume 1888 of Lecture Notes in Mathematics. SpringerVerlag, 2006.
[vdH07] J. van der Hoeven. On effective analytic continuation. MCS, 1(1):111–175, 2007.
[vdHS06] J. van der Hoeven and J.R. Shackell. Complexity bounds for zero-test algorithms. JSC, 41:1004–1020, 2006.