<\body> <\doc-note> This work has partially been supported by the ANR Gecko project. </doc-note> <\author-address> CNRS, Département de Mathématiques (Bât. 425), Université Paris-Sud, 91405 Orsay Cedex, France </author-address> <\abstract> The asymptotic behaviour of many univariate functions can only be expressed in generalized asymptotic scales, which are not merely formed of powers of a single variable. The computation of asymptotic expansions of functions in such generalized scales may lead to infinite cancellations, which complicate the design and implementation of practical algorithms. In this paper, we introduce a new heuristic technique of ``meta-expansions'', which is both simple and efficient in practice, even though the answers are not guaranteed to be correct in general. </abstract> The asymptotic behaviour of many univariate functions can only be expressed in generalized asymptotic scales, which are not merely formed of powers of a single variable. It was already noticed by Hardy that many interesting functions arising in combinatorics, number theory or physics can be expanded with respect to scales formed by so-called exp-log functions or L-functions. An exp-log function is constructed from an indeterminate $x$ and the real numbers using the field operations, exponentiation and logarithm. An L-function is defined similarly, by adding algebraic functions to our set of building blocks. However, the class of functions which can be expanded with respect to a scale formed by exp-log functions (or L-functions) is not stable under several simple operations, such as integration or functional inversion. More recently, exp-log functions have been generalized so as to allow for expressions with infinite sums, giving rise to the notion of transseries. In section, we briefly recall some of the most important definitions and properties. For more details, we refer to. 
Given an explicit expression (such as an exp-log function), or the solution to an implicit equation, an interesting question is how to find its asymptotic expansion automatically. When working with respect to a generalized asymptotic scale, such as $\{x^{\alpha}\,\mathrm{e}^{\beta x}:\alpha,\beta\in\mathbb{R}\}$ at infinity ($x\rightarrow\infty$), even simple expressions can lead to infinite cancellations: <\eqnarray*> \frac{1}{1-x^{-1}-\mathrm{e}^{-x}}-\frac{1}{1-x^{-1}} &=& \Bigl(1+\frac{1}{x}+\frac{1}{x^{2}}+\cdots+\mathrm{e}^{-x}+\cdots\Bigr)-\Bigl(1+\frac{1}{x}+\frac{1}{x^{2}}+\cdots\Bigr) \\ &=& \mathrm{e}^{-x}+\frac{2\,\mathrm{e}^{-x}}{x}+\frac{3\,\mathrm{e}^{-x}}{x^{2}}+\cdots </eqnarray*> In many cases, the detection of infinite cancellations can be reduced to the zero-test problem in a suitable class of functions. However, zero-test problems are often very hard. In the case of exp-log functions, a complete zero-test is only known if Schanuel's conjecture holds. If we want to expand more general expressions or more general solutions to differential or functional equations, the corresponding zero-test problem tends to get even harder. Consequently, zero-testing tends to complicate the design of mathematically correct algorithms for more general asymptotic expansions. From the practical point of view, the implementation of robust zero-tests also requires a lot of work. Moreover, mathematically correct zero-tests tend to monopolize the execution time. In this paper, we will investigate an alternative, more heuristic approach to the computation of asymptotic expansions. We will adopt a similar point of view as a numerical analyst who conceives a real number as the limit of a sequence of better and better approximations. In our case, the asymptotic expansion will be the limit of more and more precise polynomial approximations, where the monomials are taken in the asymptotic scale. As is often the case in numerical analysis, our approach will only be justified by informal arguments and the fact that it seems to work very well in practice. Besides the analogy with numerical analysis, there are a few additional interesting points which deserve to be mentioned. 
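This kind of cancellation is easy to reproduce mechanically. The following Python sketch (our own illustration, not part of the algorithms of this paper) expands both quotients in the grid-based scale above, encoding a monomial $x^{-a}\mathrm{e}^{-bx}$ as the exponent pair $(a,b)$; all pure powers of $1/x$ cancel in the difference, and only terms containing $\mathrm{e}^{-x}$ survive.

```python
# Monomials x^(-a) * exp(-b*x) are encoded as exponent pairs (a, b);
# a series is a dict {(a, b): coefficient}.  N bounds both exponents.
N = 8

def mul(f, g):
    """Truncated product of two series."""
    h = {}
    for (a1, b1), c1 in f.items():
        for (a2, b2), c2 in g.items():
            if a1 + a2 <= N and b1 + b2 <= N:
                k = (a1 + a2, b1 + b2)
                h[k] = h.get(k, 0) + c1 * c2
    return {k: c for k, c in h.items() if c}

def add(f, g):
    h = dict(f)
    for k, c in g.items():
        h[k] = h.get(k, 0) + c
    return {k: c for k, c in h.items() if c}

def geom(u):
    """Truncated expansion of 1/(1-u) = 1 + u + u^2 + ..."""
    s, p = {(0, 0): 1}, {(0, 0): 1}
    for _ in range(2 * N):
        p = mul(p, u)
        s = add(s, p)
    return s

u = {(1, 0): 1}                     # the monomial 1/x
v = {(0, 1): 1}                     # the monomial exp(-x)
f = geom(u)                         # 1/(1 - 1/x)
g = geom(add(u, v))                 # 1/(1 - 1/x - exp(-x))
diff = add(g, {k: -c for k, c in f.items()})

# Every pure power of 1/x cancels; each surviving monomial contains exp(-x).
assert diff and all(b >= 1 for (a, b) in diff)
```

Note that no finite truncation order in the scale $x^{-\mathbb{N}}$ alone would reveal the surviving $\mathrm{e}^{-x}$ terms; this is precisely the difficulty that the expanders of this paper address.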
First of all, a sequence $f_{;0},f_{;1},\ldots$ of better and better approximations gives rise to a second sequence $f_{0}=f_{;0}$, $f_{1}=f_{;1}-f_{;0}$, $f_{2}=f_{;2}-f_{;1}$, \ldots, which can itself be encoded by a generating series <\equation*> \hat f(z)=\sum_{n}f_{n}\,z^{n}. </equation*> The computation of the expansion of $f=\sum_{n}f_{n}$ can thus be re-interpreted as the computation of the expansion of $\hat f$, which we therefore regard as the ``meta-expansion'' of $f$. This technique will be detailed in section. Additional complications arise in the context of transseries, because the elements of the asymptotic scale are themselves exponentials of other transseries. The computation of meta-expansions for transseries will be detailed in sections and. A second interesting aspect of meta-expansions is that we may operate on the meta-expansion without changing the underlying expansion. In a complex computation involving lots of auxiliary series, this provides some meta-control to the user. For instance, some subexpressions can be computed with more accuracy (or less accuracy) and one can focus on a specific range of terms. Techniques for the acceleration of convergence play a similar role in numerical analysis. Another operation, called ``stabilization'', removes those terms in the approximations $f_{;n}$ which still change every few steps. After stabilization, we tend to compute only terms which occur in the final expansion of $f$, even though they usually appear in a different order. In particular, stabilization gives rise to a heuristic zero-test. Meta-operations on meta-expansions will be discussed in section. One motivation behind the present paper was its application to the asymptotic extrapolation of sequences by transseries. This application requires the computation of discrete sums and products of transseries. In section, we have included a small demonstration of our current implementation in our system. For the purpose of this application, we have mainly considered univariate transseries expansions so far. 
Of course, the approach of our paper generalizes to expansions in several variables. A natural next step for future developments would be to implement the Newton polygon method for rather general functional equations. Another interesting question is how to re-incorporate theoretically correct zero-tests into our mechanism, and how much we really sacrifice when using our heuristic substitute. A few ideas in these directions will be given in section, some of which actually go back to. In this section, we briefly survey some basic properties of transseries. For more details, we refer to. Let $R$ be a ring and $\mathfrak{M}$ a commutative monomial monoid, partially ordered by an asymptotic ordering $\preccurlyeq$. A subset $\mathfrak{S}\subseteq\mathfrak{M}$ is said to be well-based if it is well-quasi-ordered for the opposite ordering of $\preccurlyeq$, and grid-based if it satisfies a bound of the form <\equation> \mathfrak{S}\subseteq\mathfrak{m}_{1}^{\mathbb{N}}\cdots\mathfrak{m}_{k}^{\mathbb{N}}\,\{\mathfrak{n}_{1},\ldots,\mathfrak{n}_{l}\}\qquad(\mathfrak{m}_{1},\ldots,\mathfrak{m}_{k}\prec 1). </equation> A well-based series is a formal sum $f=\sum_{\mathfrak{m}\in\mathfrak{M}}f_{\mathfrak{m}}\,\mathfrak{m}$, whose support $\operatorname{supp}f=\{\mathfrak{m}\in\mathfrak{M}:f_{\mathfrak{m}}\neq 0\}$ is well-based. It is classical that the set $R[[\mathfrak{M}]]$ of well-based power series forms a ring. The subset of grid-based power series (i.e. series with grid-based support) forms a subring of $R[[\mathfrak{M}]]$. <\example> Consider the series <\eqnarray*> ||1>-x\>>=1+x1>+x2>+x\>+x3>+x\-1>+x4>+x\\2>+x5>+x2*\>+\>>|||1>+g(x>)=x1>+x\>+x\>+x\>+\,>>>> with =x>> for \> ( >\x>\\\\>). Then the first series is grid-based and the second one only well-based. </example> A family $(f_{i})_{i\in I}$ of series in $R[[\mathfrak{M}]]$ is said to be well-based if $\bigcup_{i\in I}\operatorname{supp}f_{i}$ is well-based and $\{i\in I:\mathfrak{m}\in\operatorname{supp}f_{i}\}$ is finite for every $\mathfrak{m}\in\mathfrak{M}$. In that case, the sum $f=\sum_{i\in I}f_{i}$ with $f_{\mathfrak{m}}=\sum_{i\in I}(f_{i})_{\mathfrak{m}}$ is again in $R[[\mathfrak{M}]]$. A linear mapping $\varphi:R[[\mathfrak{M}]]\rightarrow R[[\mathfrak{M}]]$ is said to be strongly linear if it preserves well-based summation. Grid-based families and the corresponding notion of strong linearity are defined similarly. In the case when $R$ is a field and $\mathfrak{M}$ is totally ordered, the sets of grid-based and of well-based series are also fields. Furthermore, any non-zero $f\in R[[\mathfrak{M}]]$ admits a unique dominant monomial $\mathfrak{d}_{f}=\max_{\preccurlyeq}\operatorname{supp}f$, with corresponding dominant coefficient $c_{f}=f_{\mathfrak{d}_{f}}$ and $\delta_{f}$ such that $f=c_{f}\,\mathfrak{d}_{f}\,(1+\delta_{f})$. We call $\tau_{f}=c_{f}\,\mathfrak{d}_{f}$ the dominant term of $f$ and define $\tau_{f}=0$ in the case when $f=0$. 
The series $f$ also admits a canonical decomposition <\equation*> f=f^{\uparrow}+f^{=}+f^{\downarrow}=\sum_{\mathfrak{m}\succ 1}f_{\mathfrak{m}}\,\mathfrak{m}+f_{1}+\sum_{\mathfrak{m}\prec 1}f_{\mathfrak{m}}\,\mathfrak{m} </equation*> into a purely large part $f^{\uparrow}$, a constant part $f^{=}=f_{1}$ and an infinitesimal part $f^{\downarrow}$. Here $\mathfrak{m}\succ 1$ just means that $\mathfrak{m}\succcurlyeq 1$ and $\mathfrak{m}\neq 1$; more generally, $f\preccurlyeq g\Longleftrightarrow\mathfrak{d}_{f}\preccurlyeq\mathfrak{d}_{g}$ and $f\prec g\Longleftrightarrow\mathfrak{d}_{f}\prec\mathfrak{d}_{g}$. If $f$ is grid-based, then so are $f^{\uparrow}$, $f^{=}$ and $f^{\downarrow}$. If $R$ is actually an ordered field, then so are the fields of grid-based and of well-based series, by taking $f>0\Longleftrightarrow c_{f}>0$ for all $f\neq 0$. The field $\mathbb{T}$ of grid-based transseries is a field of the form $\mathbb{T}=\mathbb{R}[[\mathfrak{T}]]$ with additional operators $\exp$ and $\log$. The set $\mathfrak{T}$ of transmonomials coincides with the set $\exp\mathbb{T}^{\uparrow}$ of exponentials of purely large transseries, with $\log\exp f=f$. More generally, we have <\eqnarray*> \log f &=& \log\mathfrak{d}_{f}+\log c_{f}+\log(1+\delta_{f}) \\ \log(1+\delta_{f}) &=& \delta_{f}-\frac{\delta_{f}^{2}}{2}+\frac{\delta_{f}^{3}}{3}-\cdots, </eqnarray*> for any $f\in\mathbb{T}^{>}$ (i.e. $f>0$) and <\eqnarray*> \exp f &=& \exp f^{\uparrow}\,\exp f^{=}\,\exp f^{\downarrow} \\ \exp f^{\downarrow} &=& 1+f^{\downarrow}+\frac{(f^{\downarrow})^{2}}{2!}+\cdots. </eqnarray*> The construction of $\mathbb{T}$ is detailed in. The construction of fields of well-based transseries is a bit more delicate, because one cannot simultaneously ensure stability under exponentiation and infinite summation. However, there is a smallest such field $\mathbb{R}[[[x]]]$, if we exclude transseries with arbitrarily nested logarithms or exponentials. Let $\mathbb{T}$ be one of the fields of grid-based transseries or $\mathbb{R}[[[x]]]$. Then $\mathbb{T}$ admits a lot of additional structure: <\enumerate> There exists a unique strong derivation $f\mapsto f'$ with $c'=0$ for $c\in\mathbb{R}$, $x'=1$ and $(\exp f)'=f'\,\exp f$ for all $f\in\mathbb{T}$. There exists a unique strong integration $f\mapsto\int f$ with $(\int f)'=f$ and $(\int f)^{=}=0$ for all $f\in\mathbb{T}$. For any positive, infinitely large transseries $g\in\mathbb{T}^{>,\uparrow}$, there exists a unique strongly linear right composition $f\mapsto f\circ g$, with $x\circ g=g$, $(\exp f)\circ g=\exp(f\circ g)$ and $(\log f)\circ g=\log(f\circ g)$. Each $g\in\mathbb{T}^{>,\uparrow}$ admits a unique functional inverse $g^{\mathrm{inv}}$. $\mathbb{T}$ is real closed. Even better: given a differential polynomial $P\in\mathbb{T}\{F\}$ and $f<g$ in $\mathbb{T}$ with $P(f)\,P(g)<0$, there exists an $h\in\mathbb{T}$ with $f<h<g$ and $P(h)=0$. </enumerate> Furthermore, there exist very general implicit function theorems, which can be used in order to solve functional equations, when put in a suitable normal form. The field $\mathbb{T}$ is highly non-Archimedean. For what follows, it will be convenient to introduce the asymptotic flatness relations <\eqnarray*> f\prec\!\!\prec g &\Longleftrightarrow& \log f\prec\log g \\ f\preccurlyeq\!\!\preccurlyeq g &\Longleftrightarrow& \log f\preccurlyeq\log g. </eqnarray*> For $f,g\succ 1$, one has $f\prec\!\!\prec g$ if and only if $f^{\alpha}\prec g$ for all $\alpha\in\mathbb{R}$. 
The intuitive idea behind meta-expansion is that, from the computational point of view, series usually arise as a sequence of successive approximations of an interesting object. We will make this simple idea operational by taking the approximations to be polynomials. In order to avoid repetitions, we will systematically work in the well-based setting; it is easy to adapt the definitions to the grid-based case. Let $R$ be an effective ring and $\mathfrak{M}$ an effective monomial monoid. Recall that $R[[\mathfrak{M}]]$ stands for the ring of well-based generalized power series and let $R[\mathfrak{M}]$ be the corresponding set of polynomials (i.e. series with finite support). We define an expander to be a computable well-based sequence $\hat f=(f_{n})\in R[\mathfrak{M}]^{\mathbb{N}}$ of polynomials. Its sum $f=\sum_{n}f_{n}$ will be called the result of the expander and we say that $\hat f$ is an expander for $f$. We also define an approximator to be a computable sequence $(f_{;n})\in R[\mathfrak{M}]^{\mathbb{N}}$, such that $\bigcup_{n}\operatorname{supp}f_{;n}$ is well-based and such that for each $\mathfrak{m}\in\mathfrak{M}$, there exists an $N\in\mathbb{N}$ for which $(f_{;n})_{\mathfrak{m}}$ is constant for $n\geqslant N$. In that case, the limit $f=\lim f_{;n}=\sum_{\mathfrak{m}\in\mathfrak{M}}f_{\mathfrak{m}}\,\mathfrak{m}$ is called the result of the approximator. Clearly, the notions of expander and approximator are variants of one another: if $(f_{n})$ is an expander, then $f_{;n}=f_{0}+\cdots+f_{n}$ defines an approximator with the same result. Similarly, if $(f_{;n})$ is an approximator, then $f_{0}=f_{;0}$, $f_{n}=f_{;n}-f_{;n-1}$ ($n>0$) defines an expander with the same result. However, for certain purposes, expanders are more suitable, because an expander $\hat f$ can be manipulated via its generating series <\equation*> \hat f(z)=\sum_{n}f_{n}\,z^{n}. </equation*> For other purposes though, approximators are the more natural choice. As far as notations are concerned, it is convenient to pass from expanders to approximators (and vice versa) by prefixing the index by a semicolon (resp. removing the semicolon). The series in $R[[\mathfrak{M}]]$ which admit an expander (or, equivalently, an approximator) will be called approximable. Given an approximable $f\in R[[\mathfrak{M}]]$, we use the notation $\hat f$ to indicate that $\hat f$ represents $f$. 
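The two conversions just described are straightforward to program. The following Python sketch (our illustration; series are represented as dicts from monomials to coefficients, and expanders and approximators as functions of $n$) passes from an expander to the associated approximator and back:

```python
def to_approximator(expander):
    """f_{;n} = f_0 + ... + f_n: partial sums of an expander."""
    def approx(n):
        s = {}
        for k in range(n + 1):
            for m, c in expander(k).items():
                s[m] = s.get(m, 0) + c
        return {m: c for m, c in s.items() if c}
    return approx

def to_expander(approx):
    """f_0 = f_{;0}, f_n = f_{;n} - f_{;n-1}: differences of an approximator."""
    def expander(n):
        d = dict(approx(n))
        if n > 0:
            for m, c in approx(n - 1).items():
                d[m] = d.get(m, 0) - c
        return {m: c for m, c in d.items() if c}
    return expander

# Example: the expander f_n = u^n of f = 1/(1-u), with u^k encoded as k.
f_hat = lambda n: {n: 1}
f_approx = to_approximator(f_hat)             # f_{;3} = 1 + u + u^2 + u^3
assert f_approx(3) == {0: 1, 1: 1, 2: 1, 3: 1}
assert to_expander(f_approx)(3) == f_hat(3)   # round trip
```

In a real implementation one would of course memoize the partial sums instead of recomputing them, exactly as suggested by the abstract-class representation discussed below.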
Given $f\in R[[\mathfrak{M}]]$, we will also use the notation $\hat f$ to indicate that $\hat f$ is a representation for $f$. For more details on this convention, see 2.1. In practice, expanders and approximators are usually implemented by pointers to an abstract class with a method to compute its coefficients. For more details on how to do this, we refer to. Let us now show how to implement expanders and approximators for basic operations in $R[[\mathfrak{M}]]$. Given a polynomial $P\in R[\mathfrak{M}]$, an expander and approximator for $P$ are given by <\eqnarray*> \hat P(z) &=& P \\ P_{;n} &=& P. </eqnarray*> It will be convenient to simply regard $R[\mathfrak{M}]$ as a subset of the set of expanders. Given approximable $f,g\in R[[\mathfrak{M}]]$, we may compute an expander and an approximator for $h=f+g$ by <\eqnarray*> \hat h(z) &=& \hat f(z)+\hat g(z) \\ h_{;n} &=& f_{;n}+g_{;n}. </eqnarray*> Subtraction is treated in a similar way. We may define expanders and approximators for products in the same way as for addition: <\eqnarray*> \hat h(z) &=& \hat f(z)\,\hat g(z) \\ h_{;n} &=& f_{;n}\,g_{;n}. </eqnarray*> However, a subtlety occurs here. Consider for instance the case when $R[[\mathfrak{M}]]=R[[u]]$ and $\hat f=\hat g=1/(1-u\,z)$, so that $f=g=1/(1-u)$ and $f_{;n}=g_{;n}=1+\cdots+u^{n}$. Contrary to what happened in the case of addition, the two definitions do not coincide. Indeed, we respectively find <\eqnarray*> h_{n} &=& (n+1)\,u^{n} \\ h_{;n}-h_{;n-1} &=& 2\,u^{n}+2\,u^{n+1}+\cdots+2\,u^{2n-1}+u^{2n}. </eqnarray*> As a general rule, the manipulation of expanders tends to be a bit more economic from the computational point of view, their coefficients being smaller in size. Let $h\in R[[t]]$ be a computable formal power series and let $f\in R[[\mathfrak{M}]]$ be approximable and infinitesimal. Then we may compute an expander for $g=h\circ f$ using <\eqnarray*> \hat g(z) &=& h(z\,\hat f(z)). </eqnarray*> Notice that the individual coefficients of $\hat f$ need not be infinitesimal. Besides, the composition $h\circ f$ is usually not a polynomial. Therefore, we have forced the argument to become infinitesimal in $z$ using a multiplication by $z$. This is justified by the fact that $h\circ f$ is the result of the expander $\hat g$. Multiplication of $\hat f(z)$ by $z$ corresponds to delaying the approximation process of $f$. Assume now that $R$ is a field and $\mathfrak{M}$ a totally ordered group. 
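The product subtlety can be checked mechanically. In the following Python sketch (our illustration; the monomial $u^{k}$ is encoded as the integer $k$), both recipes are applied to $f=g=1/(1-u)$: the stage-$n$ polynomials differ, but the two representations have the same result $1/(1-u)^{2}$.

```python
f = lambda n: {n: 1}                 # expander of 1/(1-u): f_n = u^n

def prod_expander(f, g, n):
    """Coefficient of z^n in f^(z)*g^(z): h_n = sum_{i+j=n} f_i g_j."""
    h = {}
    for i in range(n + 1):
        for m1, c1 in f(i).items():
            for m2, c2 in g(n - i).items():
                h[m1 + m2] = h.get(m1 + m2, 0) + c1 * c2
    return h

def partial(f, n):                   # approximator f_{;n} = f_0 + ... + f_n
    s = {}
    for k in range(n + 1):
        for m, c in f(k).items():
            s[m] = s.get(m, 0) + c
    return s

def mul(p, q):                       # exact product of two polynomials
    h = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            h[m1 + m2] = h.get(m1 + m2, 0) + c1 * c2
    return h

# Stage 2: the expander product yields 3*u^2 ...
assert prod_expander(f, f, 2) == {2: 3}
# ... while the approximator product contributes 2u^2 + 2u^3 + u^4 at stage 2:
diff = mul(partial(f, 2), partial(f, 2))
for m, c in mul(partial(f, 1), partial(f, 1)).items():
    diff[m] = diff.get(m, 0) - c
assert {m: c for m, c in diff.items() if c} == {2: 2, 3: 2, 4: 1}

# Both converge to 1/(1-u)^2, whose coefficient of u^k is k+1.
N = 6
via_exp = {}
for n in range(N + 1):
    for m, c in prod_expander(f, f, n).items():
        via_exp[m] = via_exp.get(m, 0) + c
assert all(via_exp[k] == k + 1 for k in range(N + 1))
```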
The inverse of a series $f\in R[[\mathfrak{M}]]$ may be computed using left composition with power series: <\equation> \frac{1}{f}=\frac{1}{\tau_{f}}\,(1-\delta_{f}+\delta_{f}^{2}-\delta_{f}^{3}+\cdots)=\frac{1}{\tau_{f}}\,\bigl((1+t)^{-1}\circ\delta_{f}\bigr). </equation> Unfortunately, there exists no general algorithm for the computation of the dominant term $\tau_{f}$. We will therefore assume the existence of an oracle for this purpose. In section, we will present a heuristic algorithm which can be used in practice. A general technique for the resolution of functional equations is to rewrite them into the form <\equation*> f=\Phi(f) </equation*> and apply a fixed-point theorem. In our case, this requires a well-based operator $\Phi:R[[\mathfrak{M}]]\rightarrow R[[\mathfrak{M}]]$ for which we can prove that $\Phi(0),\Phi(\Phi(0)),\Phi(\Phi(\Phi(0))),\ldots$ admits a well-based limit in $R[[\mathfrak{M}]]$. The obvious approximator for $f$ is given by <\equation*> f_{;n}=\Phi^{n}(0). </equation*> For some general fixed-point theorems for sequences of this kind, we refer to and 6>. In order to compute with well-based transseries, we take our coefficients in an effective subfield $C$ of $\mathbb{R}$ and our monomials in an effective subgroup $\mathfrak{M}$ of the group of transmonomials. Moreover, the monomials in $\mathfrak{M}$ which are not iterated logarithms are themselves exponentials of approximable transseries. More precisely, elements in $\mathfrak{M}$ are represented by monomial representations $\bar{\mathfrak{m}}$ which are of one of the following forms: <\enumerate> either $\bar{\mathfrak{m}}=\log_{\ell}x$ for some $\ell\in\mathbb{N}$; or $\bar{\mathfrak{m}}=\exp\varphi$, with $\varphi$ a purely large approximable transseries. </enumerate> In the first case, the exponential height $h_{\bar{\mathfrak{m}}}$ of $\bar{\mathfrak{m}}$ is defined to be zero and in the second case, we set $h_{\bar{\mathfrak{m}}}=1+\max_{\bar{\mathfrak{v}}\in\operatorname{supp}\log\bar{\mathfrak{m}}}h_{\bar{\mathfrak{v}}}$. Elements $\mathfrak{m},\mathfrak{n}\in\mathfrak{M}$ are multiplied using $\bar{\mathfrak{m}}\,\bar{\mathfrak{n}}=\exp(\log\bar{\mathfrak{m}}+\log\bar{\mathfrak{n}})$ and inverted using $\bar{\mathfrak{m}}^{-1}=\exp(-\log\bar{\mathfrak{m}})$. Here $\log\bar{\mathfrak{m}}$ is again approximable: if $\bar{\mathfrak{m}}=\log_{\ell}x$, then $\log\bar{\mathfrak{m}}=\log_{\ell+1}x$; if $\bar{\mathfrak{m}}=\exp\varphi$, then $\log\bar{\mathfrak{m}}=\varphi$. The asymptotic ordering $\preccurlyeq$ on $\mathfrak{M}$ is implemented using <\equation*> \mathfrak{m}\prec\mathfrak{n}\Longleftrightarrow\log\mathfrak{m}<\log\mathfrak{n}\Longleftrightarrow\tau_{\log(\mathfrak{m}/\mathfrak{n})}<0, </equation*> and therefore relies on an oracle for the computation of $\tau_{\log(\mathfrak{m}/\mathfrak{n})}$. Setting $\varphi=\log(\mathfrak{m}/\mathfrak{n})$, we notice that heuristic algorithms for the computation of this dominant term recursively need to compare monomials in $\operatorname{supp}\varphi$ with respect to $\preccurlyeq$. 
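As a toy instance of the fixed-point technique, consider the ordinary power series equation $f=1+z\,f^{2}$, whose solution has the Catalan numbers as coefficients. The following Python sketch (our illustration, with truncation at order $N$) computes the approximators $f_{;n}=\Phi^{n}(0)$; each coefficient stabilizes after finitely many iterations, exactly as required of an approximator.

```python
N = 6  # truncation order

def mul(p, q):
    """Truncated product of power series given as {degree: coefficient}."""
    h = {}
    for i, a in p.items():
        for j, b in q.items():
            if i + j <= N:
                h[i + j] = h.get(i + j, 0) + a * b
    return h

def phi(f):
    """The operator Phi(f) = 1 + z*f^2, truncated at order N."""
    g = {0: 1}
    for i, c in mul(f, f).items():
        if i + 1 <= N:
            g[i + 1] = g.get(i + 1, 0) + c
    return g

f = {}
for _ in range(N + 2):   # f_{;n} = Phi^n(0); coefficient k is stable for n > k
    f = phi(f)

assert [f.get(k, 0) for k in range(N + 1)] == [1, 1, 2, 5, 14, 42, 132]
```

The same iteration scheme, with truncated transseries in place of truncated power series, is what the well-based fixed-point theorems above justify.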
The termination of the recursion is based on the fact that the exponential heights strictly decrease: $h_{\bar{\mathfrak{v}}}<h_{\bar{\mathfrak{m}}}$. The main additional operations for the manipulation of transseries are exponentiation and logarithm. Since exponentiation relies on canonical decompositions, we start with the general operation of restriction of support. Given $\mathfrak{S}\subseteq\mathfrak{M}$, we define the restriction of $f\in R[[\mathfrak{M}]]$ to $\mathfrak{S}$ by <\equation*> f_{\mathfrak{S}}=\sum_{\mathfrak{m}\in\mathfrak{S}}f_{\mathfrak{m}}\,\mathfrak{m}. </equation*> If the subset $\mathfrak{S}$ admits a computable membership test, then the mapping $R[\mathfrak{M}]\rightarrow R[\mathfrak{M}];f\mapsto f_{\mathfrak{S}}$ is computable. In particular, given an approximable $f\in R[[\mathfrak{M}]]$, we may compute $f_{\mathfrak{S}}$ using <\equation*> (f_{\mathfrak{S}})_{;n}=(f_{;n})_{\mathfrak{S}}. </equation*> Now making continued use of our oracle for the computation of dominant terms, the sets $\mathfrak{M}^{\uparrow}=\{\mathfrak{m}\in\mathfrak{M}:\mathfrak{m}\succ 1\}$, $\mathfrak{M}^{=}=\{1\}$ and $\mathfrak{M}^{\downarrow}=\{\mathfrak{m}\in\mathfrak{M}:\mathfrak{m}\prec 1\}$ are computable. Consequently, we have algorithms to compute $f^{\uparrow}=f_{\mathfrak{M}^{\uparrow}}$, $f^{=}=f_{\mathfrak{M}^{=}}$ and $f^{\downarrow}=f_{\mathfrak{M}^{\downarrow}}$. Assume that $\mathfrak{M}$ is closed under logarithm (for positive elements) and exponentiation. Then the formulas for $\log$ and $\exp$ above and our algorithms for canonical decomposition and left composition with power series yield a way to compute logarithms and exponentials of approximable transseries. The smallest subfield of $\mathbb{T}$ which is stable under exponentiation and logarithm is called the field of exp-log functions. There exists a zero-test for this field which relies on Schanuel's conjecture for its termination. <\example> Consider the following example from: <\equation*> f=log log (x*\>+1)-exp exp (log log x+>). </equation*> When computing an expander $\hat f$ with the routines presented so far, we obtain <\eqnarray*> |>>||>|>>||>+>>>|>>||>+>- x|2*x*\>-*\>-*\>>>|>>||+>+>- x|2*x*\>-*\>-*\>+ x|3*x*\>+ x|x*\>+*\>+*\>>>|>>|| x|2*x>-->+>+>- x|2*x*\>-*\>-*\>+ x|3*x*\>+ x|x*\>+*\>+*\>- x|4*x*\>- x|x*\>- x|2*x*\>-*\>-*\>>>|>>|| x|2*x>-- x|2*x>->->+>+>- x|2*x*\>-*\>-*\>+ x|3*x*\>+ x|x*\>+*\>+*\>- x|4*x*\>- x|x*\>- x|2*x*\>-*\>-*\>+ x|5*x*\>+ x|x*\>+ x|x*\>+ x|x*\>+*\>+*\>>>>> </example> <\remark> In practice, it is useful to have efficient algorithms for the manipulation of transmonomials. 
In a similar way as in, we therefore write transmonomials as power products $\mathfrak{m}=\mathfrak{b}_{1}^{\alpha_{1}}\cdots\mathfrak{b}_{n}^{\alpha_{n}}$ with respect to a basis $(\mathfrak{b}_{1},\ldots,\mathfrak{b}_{n})$ which is constructed incrementally during the computations. In our well-based setting, we merely require that the $\mathfrak{b}_{i}$ satisfy the hypotheses <\description> $\mathfrak{b}_{1}=\log_{\ell}x$ for some $\ell\in\mathbb{N}$. $\mathfrak{b}_{2},\ldots,\mathfrak{b}_{n}\in\exp R[[\mathfrak{M}]]^{\uparrow}$. $\log\mathfrak{b}_{1}\prec\log\mathfrak{b}_{2}\prec\cdots\prec\log\mathfrak{b}_{n}$. </description> In the grid-based setting, the last hypothesis may be replaced by the stronger requirement that $\log\mathfrak{b}_{k+1}$ can be expanded with respect to $\mathfrak{b}_{1},\ldots,\mathfrak{b}_{k}$ for all $k\in\{1,\ldots,n-1\}$. Let us now come to the more interesting operations on transseries. Differentiation and composition rely on the general principle of extension by strong linearity. Let $\varphi:\mathfrak{M}\rightarrow R[[\mathfrak{M}]]$ be a map such that each well-based subset $\mathfrak{S}$ of $\mathfrak{M}$ is mapped into a well-based family $(\varphi(\mathfrak{m}))_{\mathfrak{m}\in\mathfrak{S}}$. Then there exists a unique strongly linear extension $\hat\varphi:R[[\mathfrak{M}]]\rightarrow R[[\mathfrak{M}]]$ of $\varphi$. If $\varphi:\mathfrak{M}\rightarrow R[[\mathfrak{M}]]$ is computable, then we may compute the restriction of $\hat\varphi$ to approximable series by <\equation*> (\hat\varphi(f))_{;n}=\sum_{\mathfrak{m}\in\operatorname{supp}f_{;n}}(f_{;n})_{\mathfrak{m}}\,\varphi(\mathfrak{m})_{;n}. </equation*> The derivative of an approximable transseries is computed using extension by strong linearity. The derivative of a transmonomial is computed recursively: $(\log_{k}x)'=1/(x\,\log x\cdots\log_{k-1}x)$ and $(\exp f)'=f'\,(\exp f)$. Right composition with a fixed $g\in R[[\mathfrak{M}]]^{>,\uparrow}$ is done similarly. For arbitrary approximable transseries, we use extension by strong linearity. Transmonomials are handled recursively: $(\log_{k}x)\circ g=\log_{k}g$ and $(\exp f)\circ g=\exp(f\circ g)$. It can be shown that the derivation admits a unique strongly linear right inverse $\int$ with the ``distinguished'' property that $(\int f)^{=}=0$ for all $f$. One way to construct $\int$ is to first compute its ``trace'' $T:R[[\mathfrak{M}]]\rightarrow R[[\mathfrak{M}]]$, which is the unique strongly linear operator acting on $\mathfrak{M}$ by the formula below. We then compute $\int f$ by solving the implicit equation <\equation> {\textstyle\int}f=T\,f+{\textstyle\int}\bigl(f-(T\,f)'\bigr). </equation> One may either apply this equation for monomials and extend by strong linearity, or apply it directly for arbitrary transseries $f$. The trace $T\,\mathfrak{m}$ of a transmonomial is computed using the formula <\equation*> T \=|\>>>||log \\x>>|)\exp)]\log>||>>>>> </equation*> We next extend by strong linearity. 
We may rewrite the implicit equation for $\int$ in operator form <\equation*> {\textstyle\int}=T\,\bigl(1+(1-\partial\,T)+(1-\partial\,T)^{2}+\cdots\bigr) </equation*> and define an expander for this operator: <\equation*> \hat{\textstyle\int}(z)=\sum_{n}T\,(1-\partial\,T)^{n}\,z^{n}. </equation*> Then distinguished integration can be regarded as the application of this operator expander to another expander $\hat f$: <\equation*> \widehat{{\textstyle\int}f}(z)=\hat{\textstyle\int}(z)\,\hat f(z)=\sum_{n}\sum_{i+j=n}T\,(1-\partial\,T)^{i}\,f_{j}\,z^{n}. </equation*> We also notice that $\int$ is a fixed point of the operator <\equation*> I\mapsto T+I\,(1-\partial\,T). </equation*> Adapting our general mechanism for the computation of fixed points to operators instead of series, we find $\hat{\int}(z)$ as the natural expander of $\int$. Functional inversion of transseries can be done using formulas in 5.4 and we will not detail this operation here. Two other interesting operations are the finite difference operator and discrete summation: <\eqnarray*> \Delta f &=& f\circ(x+1)-f \\ \Sigma f &=& \Delta^{-1}f. </eqnarray*> We implemented these operations because they are critically needed in an algorithm for the asymptotic extrapolation of sequences. Modulo compositions with $\exp$ and $\log$, the multiplicative analogues are related to $\Delta$ and $\Sigma$: <\eqnarray*> \Delta^{\times}f &=& \frac{f\circ(x+1)}{f}=\exp\Delta\log f \\ \Pi f &=& \exp\Sigma\log f. </eqnarray*> Our algorithm for right composition clearly yields a way to compute $\Delta f$ for approximable $f\in R[[\mathfrak{M}]]$. The distinguished summation $\Sigma$ is the unique distinguished strongly linear right inverse of $\Delta$, with $\Delta\,\Sigma f=f$ and $(\Sigma f)^{=}=0$ for all $f$. It therefore suffices to show how to compute $\Sigma\,\mathfrak{m}$ for monomials $\mathfrak{m}\in\mathfrak{M}$. Three different cases need to be distinguished. Assuming $\mathfrak{m}\prec\!\!\prec\mathrm{e}^{x}$ (i.e. $\log\mathfrak{m}\prec x$), we compute $\Sigma\,\mathfrak{m}$ by solving the equation <\equation*> \Delta f=(\mathrm{e}^{\partial}-1)\,f=\Bigl(\partial+\frac{\partial^{2}}{2!}+\frac{\partial^{3}}{3!}+\cdots\Bigr)f=\mathfrak{m}, </equation*> which yields a solution <\equation*> f={\textstyle\int}\mathfrak{m}+\frac{\partial-(\mathrm{e}^{\partial}-1)}{\partial\,(\mathrm{e}^{\partial}-1)}\,\mathfrak{m}={\textstyle\int}\mathfrak{m}-\frac{1}{2}\,\mathfrak{m}+\frac{1}{12}\,\mathfrak{m}'-\frac{1}{720}\,\mathfrak{m}'''+\cdots. </equation*> The application of the operator <\equation*> \Psi(\partial)=\frac{\partial-(\mathrm{e}^{\partial}-1)}{\partial\,(\mathrm{e}^{\partial}-1)} </equation*> to $\mathfrak{m}$ is computed in a similar way as in the case of distinguished integration. In fact, the operator expander $\hat\Psi(z)=\Psi(z\,\partial)$ can directly be applied to expanders $\hat f$ with $\mathfrak{m}\prec\!\!\prec\mathrm{e}^{x}$ for all $\mathfrak{m}\in\operatorname{supp}f$. Moreover, this application preserves grid-basedness. In the case when $\log\mathfrak{m}\asymp x$, let $\lambda\in\mathbb{R}$ be such that $\mathfrak{m}=\mathrm{e}^{\lambda x}\,\mathfrak{n}$ with $\mathfrak{n}\prec\!\!\prec\mathrm{e}^{x}$. 
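The flat-case inversion of $\Delta=\mathrm{e}^{\partial}-1$ is just the Euler–Maclaurin expansion, and on ordinary polynomials it can be carried out exactly. The following Python sketch (our illustration; the helper names are ours) uses the equivalent Bernoulli-number form $\Sigma=\sum_{k\geqslant 0}\frac{B_{k}}{k!}\,\partial^{k-1}$, with the convention $B_{1}=-\frac{1}{2}$, to compute a polynomial $f$ with $f(x+1)-f(x)=x^{4}$; the result matches the sample session below, $\Sigma\,x^{4}=\frac{x^{5}}{5}-\frac{x^{4}}{2}+\frac{x^{3}}{3}-\frac{x}{30}$.

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m with B_1 = -1/2 (matching 1/(e^t - 1))."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, j) * B[j] for j in range(n))
    return B

def discrete_sum(p):
    """Solve f(x+1) - f(x) = p(x) via f = sum_k B_k/k! * p^(k-1).
    Polynomials are coefficient lists, p[i] = coefficient of x^i."""
    d = len(p) - 1
    B = bernoulli(d + 1)
    # k = 0 contributes the integral of p (with zero constant term)
    f = [Fraction(0)] + [Fraction(c, i + 1) for i, c in enumerate(p)]
    deriv, fact = list(map(Fraction, p)), 1
    for k in range(1, d + 2):
        fact *= k
        for i, c in enumerate(deriv):            # add B_k/k! * p^(k-1)
            f[i] += B[k] / fact * c
        deriv = [(i + 1) * c for i, c in enumerate(deriv[1:])]
    return f

f = discrete_sum([0, 0, 0, 0, 1])                # p(x) = x^4
assert f == [Fraction(0), Fraction(-1, 30), Fraction(0),
             Fraction(1, 3), Fraction(-1, 2), Fraction(1, 5)]
# Telescoping check: f(10) - f(0) = 0^4 + 1^4 + ... + 9^4
val = lambda g, x: sum(c * x**i for i, c in enumerate(g))
assert val(f, 10) - val(f, 0) == sum(j**4 for j in range(10))
```

For general flat transmonomials the derivative $\mathfrak{m}\mapsto\mathfrak{m}'$ replaces polynomial differentiation, and the series for $\Psi(\partial)$ must itself be handled through an operator expander, as explained above.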
We now search for a solution to $\Delta f=\mathfrak{m}$ of the form $f=\mathrm{e}^{\lambda x}\,g$, which leads to the equation <\equation> \mathrm{e}^{\lambda}\,g\circ(x+1)-g=\mathfrak{n}. </equation> We rewrite this equation in operator form <\equation*> (\mathrm{e}^{\lambda}\,\mathrm{e}^{\partial}-1)(g)=\Bigl(\mathrm{e}^{\lambda}-1+\mathrm{e}^{\lambda}\,\partial+\frac{\mathrm{e}^{\lambda}}{2}\,\partial^{2}+\cdots\Bigr)(g)=\mathfrak{n} </equation*> and we invert the operator $\mathrm{e}^{\lambda}\,\mathrm{e}^{\partial}-1$ as in the flat case. No integration is needed this time, since $\mathrm{e}^{\lambda}-1\neq 0$. Again, the grid-based property is preserved by moderate discrete summation. In the case when $\log\mathfrak{m}\succ x$, we have to solve the equation <\equation*> f\circ(x+1)-f=\mathfrak{m}. </equation*> If $\mathfrak{m}\prec 1$, then this is done by computing a fixed point for the operator <\equation*> f\mapsto-\mathfrak{m}+f\circ(x+1). </equation*> If $\mathfrak{m}\succ 1$, then we compute a fixed point for the operator <\equation*> f\mapsto\mathfrak{m}\circ(x-1)+f\circ(x-1). </equation*> It can be shown that $\Sigma f$ is grid-based if $f$ is grid-based and there exists a uniform bound $\mathfrak{v}$ with $\mathfrak{m}\preccurlyeq\!\!\preccurlyeq\mathfrak{v}$ for all $\mathfrak{m}\in\operatorname{supp}f$. So far, we have not really exploited the extra level of abstraction provided by expanders. In this section, we will describe several ``meta-operations'' on expanders. These operations do not affect the series being represented, but rather concern qualitative aspects of the approximation process: they guide the rate of convergence, the terms which appear first, etc. Based on the process of ``stabilization'', we will also describe a heuristic zero-test and a heuristic method for the computation of dominant terms. In the algorithm for left composition with formal power series, we have already observed that the expanders $\hat f(z)$ and $z\,\hat f(z)$ represent the same series. More generally, given a computable function $\varphi:\mathbb{N}\rightarrow\mathbb{N}$ with $\varphi(n)\leqslant n$, we define the shortening operator $\operatorname{sh}_{\varphi}$ by <\equation*> (\operatorname{sh}_{\varphi}\hat f)_{n}=f_{n-\varphi(n)}. </equation*> In the case when $\varphi(n)=k\in\mathbb{N}$ is a constant function, we have <\equation*> (\operatorname{sh}_{k}\hat f)(z)=z^{k}\,\hat f(z). </equation*> The shortening operator is typically used for the expansion of expressions which involve an expander $\hat g$, such that the expression size of $g_{n}$ tends to grow very rapidly with $n$. For instance, we may prefer to compute a sum using the expander $\operatorname{sh}_{\varphi}\hat f+\hat g$ instead of $\hat f+\hat g$. Given a computable function $\varphi:\mathbb{N}\rightarrow\mathbb{N}$, the lengthening operator $\operatorname{le}_{\varphi}$ is defined by <\equation*> (\operatorname{le}_{\varphi}\hat f)_{n}=f_{n+\varphi(n)}. </equation*> 
In the case when $\varphi(n)=k\in\mathbb{N}$ is a constant function, we have <\equation*> (\operatorname{le}_{k}\hat f)(z)=\frac{\hat f(z)-f_{0}-f_{1}\,z-\cdots-f_{k-1}\,z^{k-1}}{z^{k}}. </equation*> During the expansion of an expression, the lengthening operator may for instance be used in order to boost the precision of a subexpression. We may also use it as a substitute for the order parameter of a typical expansion command: we would simply display $\operatorname{le}_{k}f$ in order to show an additional $k$ terms of $f$. An even more interesting meta-operation is stabilization. Given a computable function $\varphi:\mathbb{N}\rightarrow\mathbb{N}$, we define it by <\eqnarray*> (\operatorname{stab}_{\varphi}\hat f)_{;n} &=& \sum_{\mathfrak{m}\in\mathfrak{S}_{\varphi,n}}(f_{;n})_{\mathfrak{m}}\,\mathfrak{m} \\ \mathfrak{S}_{\varphi,n} &=& \{\mathfrak{m}\in\operatorname{supp}f_{;n}:(f_{;n})_{\mathfrak{m}}=(f_{;n+1})_{\mathfrak{m}}=\cdots=(f_{;n+\varphi(n)})_{\mathfrak{m}}\}. </eqnarray*> The stabilization operator removes all terms from the approximation $f_{;n}$ which are still subject to changes during the next $\varphi(n)$ approximations. Even for small constant values of $\varphi$, we usually have <\equation> (\operatorname{stab}\hat f)_{;0}\sqsubseteq(\operatorname{stab}\hat f)_{;1}\sqsubseteq(\operatorname{stab}\hat f)_{;2}\sqsubseteq\cdots, </equation> where <\equation*> f\sqsubseteq g\Longleftrightarrow(g-f)_{\mathfrak{m}}=0\text{ for all }\mathfrak{m}\in\operatorname{supp}f. </equation*> In particular, the successive approximations $(\operatorname{stab}\hat f)_{;n}$ usually only contain terms which occur in the final result $f$. <\example> Let us reconsider the function $f$ from the previous example. When approximating $f$ using $(\operatorname{stab}\hat f)_{;n}$ instead of $f_{;n}$, we get: <\eqnarray*> |>>||>|>>||x|x*\>+>>>|>>||>(k=2,3,4,5)>>>> </eqnarray*> </example> <\example> An example for which the above stabilization property fails for any constant $\varphi=k\in\mathbb{N}$ is the expander <\equation*> >(z)=|x>>-|x>>, </equation*> which arises during the computation of <\equation*> f=>->*>. </equation*> Indeed, the first terms of the approximations are given by <\eqnarray*> >>||>|>>||2>>>|>>||2>+>x4>>>|>>||2>+>x4>+x6>>>|>>||2>+>4>+>x6>+x8>>>|>>||2>+>4>+>x+x8>+x10>>>>> </eqnarray*> </example> In this kind of situation, it may be necessary to consider more powerful stabilizations of the form $\operatorname{stab}_{\lambda n+\mu}$. In the case when the stabilization property holds, we have <\equation> \tau_{f}=\tau_{(\operatorname{stab}\hat f)_{;n}} </equation> for a sufficiently large value of $n$. When taking $n$ and $\varphi$ fixed, this formula also provides us with a reasonable heuristic for the computation of $\tau_{f}$ (which implies a zero-test for $f$). Of course, the values of $n$ and $\varphi$ should not be taken too small, so as to provide sufficient robustness. On the other hand, large values of $n$ and $\varphi$ may lead to unacceptable computation times. 
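A minimal Python sketch of the stabilization operator (our illustration; approximators are functions $n\mapsto$ dict of coefficients): terms whose coefficient still changes within the next $\varphi(n)$ approximations are discarded, and declaring $f$ to be zero when nothing stabilizes gives the heuristic zero-test.

```python
def stab(approx, phi, n):
    """Keep the terms of f_{;n} whose coefficients survive unchanged
    in f_{;n+1}, ..., f_{;n+phi(n)}."""
    cur = approx(n)
    later = [approx(n + i) for i in range(1, phi(n) + 1)]
    return {m: c for m, c in cur.items()
            if all(a.get(m, 0) == c for a in later)}

def is_probably_zero(approx, phi=lambda n: 2, n=3):
    """Heuristic zero-test: f is declared zero if nothing stabilizes.
    The default values of phi and n are illustrative, not canonical."""
    return stab(approx, phi, n) == {}

# An approximator of 0 whose tail keeps moving: f_{;n} = -u^(n+1).
zero_like = lambda n: {n + 1: -1}
# A genuine series: f_{;n} = 1 + u + ... + u^n for f = 1/(1-u).
geometric = lambda n: {k: 1 for k in range(n + 1)}

assert is_probably_zero(zero_like)
assert stab(geometric, lambda n: 2, 3) == {0: 1, 1: 1, 2: 1, 3: 1}
```

The second example above shows why a constant $\varphi$ is not always enough: an implementation would fall back on $\varphi(n)=\lambda n+\mu$ in such cases.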
Our current compromise has worked for all practical examples we have tried so far. <\remark> Our claim that relatively small values of $n$ and $\varphi$ provide sufficient robustness may seem very surprising at first sight, and is indeed a remarkable feature which makes meta-expansions so useful in our opinion. The intuitive justification lies in the fact that we expand in a really massive way: all our operations and all our parameters. On the one hand, canceling terms usually change at every step before they vanish, and are thereby ``stabilized out''. On the other hand, deeper combinations of parameters which lead to a genuine non-canceling contribution can usually be detected after a few steps. In particular, small power series expressions with large valuations tend to be less harmful in our context. </remark> <\remark> Consider the class of expanders which are obtained by applying our expansion algorithms to exp-log expressions. From a theoretical point of view, it might be interesting to investigate the existence of a simple computable function $\varphi:\mathbb{N}\rightarrow\mathbb{N}$ such that, for any expander $\hat f$ in this class, there exists an $n_{0}$ with <\equation*> (\operatorname{stab}_{\varphi}\hat f)_{;n_{0}}\sqsubseteq(\operatorname{stab}_{\varphi}\hat f)_{;n_{0}+1}\sqsubseteq(\operatorname{stab}_{\varphi}\hat f)_{;n_{0}+2}\sqsubseteq\cdots. </equation*> Generalizing the previous example, we see that we must take $\varphi(n)\geqslant n$. Would $\varphi(n)=n$ be sufficient? </remark> Another application of the stabilization operator is printing. The default printing method of an expander $\hat f$ might for instance be to print $(\operatorname{stab}_{\varphi}\hat f)_{;n}$ for suitable values of $n$ and $\varphi$. This method can be further improved as follows: first compute $\sigma=(\operatorname{stab}\hat f)_{;n}$ and $\tilde\sigma=(\operatorname{stab}\hat f)_{;\tilde n}$ with $\tilde n>n$. When considering the successive terms of $\tilde\sigma$ in decreasing order for $\succcurlyeq$, we may decompose the expansion $\tilde\sigma$ in blocks <\equation> \tilde\sigma=\sigma_{1}+\rho_{1}+\sigma_{2}+\rho_{2}+\cdots+\sigma_{p}+\rho_{p}, </equation> where the $\sigma_{i}$ are formed of terms of $\sigma$ and the $\rho_{i}$ of terms of $\tilde\sigma-\sigma$, with $\sigma_{2},\ldots,\sigma_{p}\neq 0$ and $\rho_{1},\ldots,\rho_{p-1}\neq 0$. We now replace each non-zero $\rho_{i}$ by the expression $O(\mathfrak{d}_{\rho_{i}})$, and print the result. For instance, if <\eqnarray*> >||+>+>+>+>>>|>||+>+>+>+>+>+*\>+>+>+>,>>>> </eqnarray*> then we print <\equation*> 1++>+O>+>+>+O*\>+>+O>. </equation*> 
An interesting feature of this way of printing is that it allows us to see some of the remaining terms after the first few leading terms. In certain cases, such as <\equation*> 100>|1-x1>>=1++>+O>+>+>+O>, </equation*> one might prefer to suppress some of these extra terms. One criterion for suppression could be the following: given the last term > of some > and any term > of >, suppress all terms \\> with \\*(\/\)> for some \>. If you are mainly interested in the first few terms of an expansion, then you may want to give early terms a higher priority during the computations. Given an expander $\hat f$, let $f_{n,k}$ denote the $k$-th term of the polynomial $f_{n}$ (in decreasing order for $\succcurlyeq$). If $k$ is larger than the number of terms of $f_{n}$, then we set $f_{n,k}=0$. Now for each computable increasing function $\varphi$, we define <\equation*> (\operatorname{bias}_{\varphi}\hat f)_{m}=\sum_{\varphi(n,k)=m}f_{n,k}. </equation*> In the case when $\varphi(n,k)=n+k$, we have <\eqnarray*> (\operatorname{bias}_{\varphi}\hat f)(z) &=& F(z,z) \\ F(u,z) &=& \sum_{n,k}f_{n,k}\,u^{n}\,z^{k}. </eqnarray*> We call $\operatorname{bias}_{\varphi}$ a biasing operator. We may typically apply it to auxiliary series $\hat g$ with a sharp increase in the number of terms of $g_{n}$ with $n$. More generally, it is possible to define operators which favour terms at the tail, in the middle, or close to a specified monomial. However, these generalizations do not seem to have any practical applications, at first sight. Most of the algorithms described in this paper have been implemented inside our system. Below, we illustrate the implementation with a sample session. 
<\input| > use "numerix"; use "algebramix"; use "multimix"; use "symbolix"; <\input| > x == infinity ('x); <\input| > 1 / (x + 1) <\output> ->+>->+O>> <\input| > 1 / (x + log x + log log x) <\output> -x|x>-logx|x>+x|x>+x*loglogx|x>+logx|x>-x|x>-x*loglogx|x>-x*loglogx|x>-logx|x>+Ox|x>> <\input| > 1 / (1 + 1/x + 1/exp x) <\output> +>->+O>->+>-*\>+O*\>+>->+O*\>->+O>> <\input| > 1 / (1 + 1/x + 1/exp x) - 1 / (1 + 1/x) <\output> >+>-*\>+O*\>+>->+O*\>->+O>> <\input| > exp (x + exp (-exp x)) - exp (x) <\output> -x>>+-x>>+-x>>+O-x>>> <\input| > exp (exp (x) / (x + 1)) <\output> |x>-|x>+|x>-|x>+O|x>>> > <\input| > derive (exp (exp (x) / (x + 1)), x) <\output> |x>-|x>+|x>-|x>+O|x>+x>|x>-|x>-|x>+|x>-|x>+O|x>+x>|x>+|x>-|x>+|x>-|x>+O|x>+x>|x>-|x>-|x>+|x>-|x>+O|x>+x>|x>+O|x>-|x>+|x>-|x>+O|x>+x>|x>> <\input| > integrate (exp (x^2), x) <\output> >|2*x>+>|4*x>+>|8*x>+>|16*x>+O>|x>> <\input| > integrate (x^x, x) <\output> x>|logx>-x>|logx>+x>|logx>-x>|logx>+Ox>|logx>+x>|x*logx>-x>|x*logx>+x>|x*logx>+Ox>|x*logx>+x>|x*logx>-x>|x*logx>+Ox>|x*logx>+x>|x*logx>+Ox>|x*logx>> <\input| > sum (x^4, x) <\output> |5>-|2>+|3>-> <\input| > product (x, x) <\output> x-x-x|2>>+x-x-x|2>>|12*x>+Ox-x-x|2>>|x>> <\input| > lengthen (product (x, x), 8) <\output> x-x-x|2>>+x-x-x|2>>|12*x>+x-x-x|2>>|288*x>-x-x-x|2>>|51840*x>-x-x-x|2>>|2488320*x>+x-x-x|2>>|209018880*x>+Ox-x-x|2>>|x>> <\input| > product (log x, x) <\output> logx-x>-x>-x>+Ox>-logx|2>>+logx-x>-x>-x>+Ox>-logx|2>>|12*x*logx>+Ologx-x>-x>-x>+Ox>-logx|2>>|x*logx>> > <\input| > fixed_point (f :-\ log x + f @ (log x)) <\output> x+loglogx+logloglogx+loglogloglogx+Ologloglogloglogx> <\input| > la == derive (fixed_point (f :-\ log x + f @ (log x)), x) <\output> +x>+x*loglogx>+x*loglogx*logloglogx>+Ox*loglogx*logloglogx*loglogloglogx>> <\input| > mu == la * la + 2 * derive (la, x) <\output> >-*logx>+O*logx*loglogx>> <\input| > fixed_point (f :-\ 1/x + f @ (x^2) + f @ (x^x)) <\output> 
+>+>+>+O>+x>>+x>>+x>>+Ox>>+*logx>>+*logx>>+O*logx>>+*logx>>+O*logx>>+x*\x>>>+x*\x>>>+Ox*\x>>>+x*\x>>>+Ox*\x>>>+*logx*\*logx>>>+O*logx*\*logx>>>+x*\x*\x>+x*logx>>>+Ox*\x*\x>+x*logx>>>> > Even though the expansion algorithms developed so far are usually sufficient for applications, they lack robustness in several ways. First of all, we have used heuristic algorithms for zero-testing and the computation of dominant terms. Some of our algorithms crucially depend on the correctness of these heuristic algorithms. For instance, our algorithm for the computation of an inverse $f^{-1}$ yields an erroneous result if the dominant term $\tau_{f}$ is computed incorrectly. Finally, the approximations $f_{;n}$ only asymptotically tend to $f$. Even if we know that a given monomial $\mathfrak{m}\in\mathfrak{M}$ is in the support of $f$, we do not know how large $n$ should be taken in order to guarantee that $(f_{;n})_{\mathfrak{m}}=f_{\mathfrak{m}}$. In this section, we describe a few ideas which may be used to increase the robustness of our algorithms. The strategy of auto-correction can be used to reduce the impact of incorrect answers of heuristic algorithms. For instance, in order to invert a series $f$, we started with the computation of $\tau_{f}$. Instead, we might imagine that the expander of $f^{-1}$ is allowed to adjust its initial guess for $\tau_{f}$ at later stages of the approximation. More precisely, for each $n$, let $\tau_{n}$ be the heuristically computed dominant term after $n$ stages and consider the expander <\equation*> |\>=1>*1>*g-1)>>|\\0>>||>>>> </equation*> Then we may define a new expander $\hat g$ by taking the diagonal of this double sequence. In order to have $g=f^{-1}$, it now suffices that $\tau_{n}=\tau_{f}$ for all sufficiently large $n$, instead of $\tau_{0}=\tau_{f}$. Of course, in nasty cases, it might still happen that $\tau_{n}\neq\tau_{f}$ for all $n$. In other words, the strategy of auto-correction produces no miracles and does not substitute for a genuine zero-test. Even in the case when the stabilization operator $\operatorname{stab}_{\varphi}$ is sufficiently powerful to guarantee correctness for a certain class of expanders, one should keep in mind that the result only becomes correct in the limit; we still don't know how many terms need to be computed. 
From a practical point of view, the strategy of auto-correction is easy to implement on the series level from section: in our example of inversion, the expander >> may simply keep |\>> in memory for the largest stage considered so far and only update its value when > changes. Implementations become more involved when considering recursive transseries expansions, as in section. Indeed, in this more general setting, we also need to correct erroneous outcomes for the asymptotic ordering >>, which recursively relies on the orderings >> and >> for expanders of lower exponential height. In order to generalize the idea, one thus has to define sequences of approximations >> and >> for >> and >>, and systematically work with the relations >> and >> when making decisions for expansions at stage . In the case when we are interested in expansions of functions inside a given class >, it sometimes happens that > admits a zero-test. For instance, if > is the class of exp-log functions, then a zero-test can be given whose correctness relies on Schanuel's conjecture. An interesting question is whether we can use a zero-test in > in order to design a non-heuristic algorithm for the computation of dominant terms. In order to make this work, we have to be able to detect infinite cancellations of terms which occur in expressions such as 1>+\x>)-log(1+x1>)>. A general mechanism for doing this is to refine the notion of expanders by indexing over a suitable well-based set. More precisely, given an abstract well-based set > (i.e. a set > which is well-quasi-ordered for the opposite ordering of >>), we define a sequence ,\,\> of finite subsets of > by =max>\\(\\\\\)>. In general, the sequence >)> is transfinite, but if we have =\>\>, then we say that > is accessible. We say that > is computable, if for any finite subset \\>, we can compute >{\\\:\\\\,\\\}>. In particular, this implies that the sequence \> is computable. A well-based expander is a computable well-based family >=(>>)\\>\C[\]>>, indexed by a computable accessible well-based set >.
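The sequence of maximal subsets can be illustrated on a finite stand-in for a well-based set. The sketch below (Python; the modelling and names are ours) peels off successive layers of maximal elements, using the componentwise order on pairs of natural numbers, a classical well-quasi-order:

```python
def maximal_layers(elems, less):
    """Peel off successive layers of maximal elements.

    For a finite stand-in of a well-based set, layer i is the set of
    maximal elements of what remains after removing layers 0..i-1; this
    mimics the (possibly transfinite) sequence of maximal subsets from
    the text.  `less` is the strict order."""
    remaining = list(elems)
    layers = []
    while remaining:
        # an element is maximal if nothing in the remainder dominates it
        layer = [x for x in remaining
                 if not any(less(x, y) for y in remaining if y != x)]
        layers.append(sorted(layer))
        remaining = [x for x in remaining if x not in layer]
    return layers
```

On {(0,0), (1,0), (0,1), (1,1)} the layers are {(1,1)}, then {(0,1), (1,0)}, then {(0,0)}; accessibility of the index set corresponds to these layers exhausting it.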
A well-based expander is the natural refinement of an expander )> in the usual sense, by regrouping terms >=>>=\\>>>>. We say that >> is a termwise well-based expander if each >>> is of the form >>=c>*\>\R*\> and the mapping \\>> is increasing. Notice that >> is automatically well-based if \\>> is increasing. Recall that the set generated by a finite subset \\> is defined by )={\\\:\\\\,\\\}>. Now consider a termwise well-based expander >>)\\>> such that >)>\\> for any finite subset \\>. If the mapping \>)>> is effective, then we call >> an expander over >. For the operations we have considered in this paper, it should be possible to replace the usual notion of expander by expanders of > (assuming that > is stable under the operation). This was already shown in for the basic operations from section and still waits to be worked out for the other ones. Given an expander >>)\\>> over >, the zero-test in > may now be used in order to compute the set =max> supp f> of dominant monomials of . The algorithm again goes back to: <\enumerate> Let \max> \>. Replace > by a minimal subset > with >)>=>)>>. Let \max>{\>>>:\\\}> and >\{\\\:\>>>=\}> for each \\>. If there exists an \\> with >>>=0>, then set <\equation*> \\max>(\\\>)\max> {\\\:\\\\>,\\\}, and go to step 2. Return >. Our notion of approximable well-based series is the natural counterpart of the concept of approximable real numbers: a real number is said to be approximable if there exists a computable sequence >:\\\>, which converges to . A stronger and more robust notion is the one of computable real numbers: we say that \> is computable, if there exists a computable function >:\>\\> which takes \\>> on input and produces an approximation =>(\)\\> with -x\|\\>. It is natural to search for an analogous notion of computable well-based series. There are really two aspects to a computable well-based series . On the one hand, we should be able to compute its coefficient >> for any monomial >. On the other hand, its support should be sufficiently effective.
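The role of the zero-test in the dominant-monomial computation described above can be seen on a toy finite model (Python sketch; the data layout {monomial: coefficient} and all names are ours). Coefficients that the zero-test recognizes as zero, i.e. the infinite cancellations, are discarded before taking maximal elements; a heuristic that skips this step would report the spurious monomials as dominant:

```python
def dominant_monomials(terms, is_zero, less):
    """Dominant monomials of a finite series {monomial: coefficient}.

    Toy model of the algorithm in the text: first discard the monomials
    whose coefficients the zero-test recognizes as zero (these encode
    the infinite cancellations), then return the maximal survivors.
    `less` is the strict asymptotic order on monomials."""
    support = [m for m, c in terms.items() if not is_zero(c)]
    return sorted(m for m in support
                  if not any(less(m, n) for n in support if n != m))
```

For instance, when the apparently largest monomial (2, 0) carries a coefficient that cancels to zero, the algorithm returns [(1, 1)] rather than [(2, 0)].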
In the case of ordinary power series R[[z]]>, the second issue does not really arise, because the support is necessarily included in the well-based set >>. In general, one might require that is given by a termwise well-based expander, which yields quite a lot of information about . As to the computation of coefficients >>, consider the case of a product , where > is totally ordered and the two factors are given by +\+\> and +\+\> with \\\\> and \\\\>. Given \\>, we hit the problem that we do not have any information on the asymptotic behaviour of the > and the >. In order to design an algorithm for the computation of >>, we need more control over this asymptotic behaviour. In the grid-based setting, we are really computing with multivariate power series, and no real difficulties arise. In the well-based setting, things get more involved. When we restrict our attention to transseries, there are ways to represent and compute with . A monomial cut is the analogue of a Dedekind cut for the set of transmonomials instead of >. Given a termwise well-based expander >>)\\>>, any initial segment )> naturally induces a transseries >)>> and a monomial cut, called the width of >)>>. For a fully satisfactory definition of computable well-based series, one should be able to compute these widths. However, we have not investigated this matter in detail yet. <\bibliography|bib|alpha|all> <\bib-list|RSSvdH96> B. I. Dahn and P. Göring. Notes on exponential-logarithmic terms. , 127:45--50, 1986. J. Écalle. . Hermann, collection: Actualités mathématiques, 1992. G. H. Gonnet and D. Gruntz. Limit computation in computer algebra. Technical Report 187, ETH, Zürich, 1992. H. Hahn. Über die nichtarchimedischen Größensysteme. , 116:601--655, 1907. G. H. Hardy. . Cambridge Univ. Press, 1910. G. H. Hardy. Properties of logarithmico-exponential functions. , 10(2):54--90, 1911. G. Higman. Ordering by divisibility in abstract algebras. , 2:326--336, 1952. E. C. Milner. Basic wqo- and bqo-theory.
In Rival, editor, , pages 487--502. D. Reidel Publ. Comp., 1985. A. Macintyre, D. Marker, and L. van den Dries. Logarithmic-exponential power series. , 56(2):417--434, 1997. A. Macintyre, D. Marker, and L. van den Dries. Logarithmic exponential series. , 1999. To appear. M. Pouzet. Applications of well quasi-ordering and better quasi-ordering. In Rival, editor, , pages 503--519. D. Reidel Publ. Comp., 1985. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. . Cambridge University Press, 3rd edition, 2007. D. Richardson. How to recognise zero. , 24:627--645, 1997. D. Richardson. Zero tests for constants in simple scientific computation. , 1(1):21--37, 2007. D. Richardson, B. Salvy, J. Shackell, and J. van der Hoeven. Expansions of exp-log functions. In Y. N. Lakshman, editor, , pages 309--313, Zürich, Switzerland, July 1996. B. Salvy. . PhD thesis, École Polytechnique, France, 1991. M. C. Schmeling. . PhD thesis, Université Paris-VII, 2001. J. Shackell. Growth estimates for exp-log functions. , 10:611--632, 1990. J. Shackell. Inverses of Hardy L-functions. , 25:150--156, 1993. J. van der Hoeven. Outils effectifs en asymptotique et applications. Technical Report LIX/RR/94/09, LIX, École polytechnique, France, 1994. J. van der Hoeven. . PhD thesis, École polytechnique, Palaiseau, France, 1997. J. van der Hoeven. Generic asymptotic expansions. , 9(1):25--44, 1998. J. van der Hoeven. Operators on generalized power series. , 45(4):1161--1190, 2001. J. van der Hoeven. Relax, but don't be too lazy. , 34:479--542, 2002. J. van der Hoeven et al. Mathemagix, 2002. J. van der Hoeven. Algorithms for asymptotic interpolation. Technical Report 2006-12, Univ. Paris-Sud, 2006. Submitted to JSC. J. van der Hoeven. Counterexamples to witness conjectures. , 41:959--963, 2006. J. van der Hoeven. , volume 1888 of . Springer-Verlag, 2006. J. van der Hoeven. On effective analytic continuation. , 1(1):111--175, 2007. J. van der Hoeven and J. R. Shackell. Complexity bounds for zero-test algorithms.
, 41:1004--1020, 2006.