<\body> <\expand|make-title> <\address> Département de Mathématiques (Bât. 425) Université Paris-Sud 91405 Orsay Cedex France <\abstract> In this paper, we present a new zero-test for expressions which are constructed from formal power series solutions to algebraic differential equations using the ring operations and differentiation. We also provide a survey of all existing methods that we know of and a detailed comparison of these methods with our approach. Zero-testing is an important issue on the analysis side of symbolic computation. Standard mathematical notation provides a way of representing many transcendental functions. However, trivial cases apart, this notation gives rise to the following problems: <\itemize> Expressions may not be defined: consider for instance division by zero, or $\sqrt{-e \cdot e}$. Expressions may be ambiguous: what values should we take for multi-valued functions such as $\log$ or $\sqrt{\cdot}$ at negative arguments? Expressions may be redundant: $\sin^2 x + \cos^2 x$ and $1$ are different expressions, but they represent the same function. Often, one is interested in expressions which represent functions in a ring. In that case, the third problem reduces to deciding when a given expression represents the zero function. As to the first two problems, one has to decide where and how we want our functions to be defined. In this paper, we will be concerned with expressions that represent formal power series (in fact, this approach covers most elementary calculus on special functions, using analytic continuation if necessary). The expressions will then be formed from the constants and the indeterminates using the ring operations and power series solutions to algebraic differential equations. The correctness and non-ambiguity of expressions may be ensured by structural induction. This may involve zero-testing for the series represented by subexpressions. Several classical approaches for zero-testing exist and we provide a quick survey of them in section 2. Our new zero-test, which is described in section 5, is based on an approach similar to the generalized solution approach from that survey.
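The redundancy problem above is the one this paper addresses. A crude but instructive way to see why $\sin^2 x + \cos^2 x$ and $1$ denote the same series is to expand both sides as truncated power series and compare coefficients. The sketch below (plain Python with exact rational arithmetic; all names are our own) does exactly this; of course, vanishing of finitely many coefficients is only a necessary condition for being zero, which is precisely why genuine zero-tests are needed.

```python
from fractions import Fraction

def mul(a, b, n):
    """Truncated Cauchy product of two power series given as coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

n = 12
sin_c = [Fraction(0)] * n
cos_c = [Fraction(0)] * n
fact = 1
for k in range(n):
    if k:
        fact *= k
    if k % 2:
        sin_c[k] = Fraction((-1) ** (k // 2), fact)
    else:
        cos_c[k] = Fraction((-1) ** (k // 2), fact)

# Coefficients of sin^2 x + cos^2 x - 1 up to order n: they all vanish,
# which is evidence (not a proof) that the expression is zero.
expr = [s + c for s, c in zip(mul(sin_c, sin_c, n), mul(cos_c, cos_c, n))]
expr[0] -= 1
print(all(c == 0 for c in expr))  # True
```

Exact rational coefficients are essential here: floating-point expansion would leave round-off residues that a zero-test cannot distinguish from genuinely small non-zero coefficients.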
We believe the algorithm to be interesting for five main reasons: <\itemize> We treat differential equations of arbitrary order. Our method accommodates divergent power series solutions. It reformulates previous work in the more standard setting of differential algebra. We believe it to be more efficient. With some more work, it might be possible to give complexity bounds for the algorithm (or a modified version of it) along the same lines as existing complexity analyses; such bounds are also interesting in relation to ``witness conjectures''. In the longer run, the algorithm might generalize to the multivariate setting of partial differential equations with initial conditions on a subspace of dimension $> 0$. Throughout the paper, we will assume that the reader is familiar with differential algebra and the notations used in this field; see the references for a nice introduction. The proof of our algorithm also uses a result from a recent preprint, which is too complicated to be presented here, although we do provide a sketch of the proof in section 4. We plan to provide some examples and more explanations in a forthcoming journal paper. We are also writing a lecture note about the subject of section 4. No implementations are available yet. A differentially algebraic power series is a power series which satisfies a non-trivial algebraic differential equation. Consider a power series expression constructed from $z$ and the constants in some field $C$, like $\mathbb{Q}$, using $+$, $-$, $\times$ and left composition of infinitesimal power series by differentially algebraic power series $\varphi_1, \ldots, \varphi_n$. Then it is a classical problem to test whether such an expression represents zero. There are many approaches to this problem. If the differentially algebraic power series are particularly simple, then it is sometimes possible to characterize all possible relations between the power series under consideration. This is clearly the case if we restrict the differentially algebraic power series to be algebraic.
A more interesting example is obtained when we also allow left composition with $\exp z$ and $\log (1 + z)$. In this case, the Ax theorem and the Risch structure theorem may be used to design a fast zero-test. The structural approach may yield very efficient algorithms when it works. However, it requires the characterization of all possible relations in a given context, whereas we merely asked for a test whether a particular one holds. Consequently, the approach usually only applies in very specific situations. An obvious way to test whether an expression represents the zero power series is to obtain a bound for its valuation, in terms of its size, under the hypothesis that the expression does not represent zero. Khovanskii has given reasonably good uniform bounds, in terms of the input size, for the number of zeros of systems of real Pfaffian functions. These bounds may be adapted to the power series context. This approach is interesting because it only requires fast power series expansions for implementing a zero-test. However, such a zero-test might be slow for expressions which can be quickly rewritten to zero (like $E - E$, where $E$ is a complicated expression). Also, if we want the approach to be efficient, good bounds (such as the ones predicted by witness conjectures) would be necessary. At the moment, we only have Khovanskii-type bounds in the case of Pfaffian functions. A new strategy for obtaining bounds, which might generalize to higher order equations by adapting the algorithm in this paper, has been proposed recently. However, the obtained bounds are still doubly exponential. From a theoretical point of view, it is also interesting to ask whether zero-tests always exist. This question has been answered very precisely and in a very general context by Denef and Lipshitz. In the present setting of power series expressions, their approach uses the well-known fact that the set of differentially algebraic power series is closed under the ring operations and composition.
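The valuation-bound strategy just described can be phrased very simply: if one can prove that every non-zero expression of a given size has valuation below some bound, then expanding that many coefficients decides zero-ness. The following sketch assumes such a bound is handed to us from outside (the function name and the bound value `10` are hypothetical illustrations; deriving actual bounds is the hard part discussed in the text).

```python
from fractions import Fraction
from math import factorial

def is_zero_with_bound(coeff, bound):
    """Zero-test via a valuation bound: given an oracle coeff(k) for the k-th
    coefficient and a *proven* bound saying that any non-zero expression of
    this size has valuation < bound, checking the first `bound` coefficients
    decides zero-ness.  The bound itself is assumed, not derived here."""
    return all(coeff(k) == 0 for k in range(bound))

# exp(z) - 1 - z has valuation 2, so it is detected long before the
# (hypothetical) bound of 10 is reached.
g = lambda k: Fraction(1, factorial(k)) if k >= 2 else Fraction(0)
print(is_zero_with_bound(g, 10))                      # False
print(is_zero_with_bound(lambda k: Fraction(0), 10))  # True
```

This also illustrates the weakness noted in the text: for an expression like $E - E$, the oracle may be very expensive to evaluate even though a rewriting-based test would succeed instantly.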
However, the equations one obtains for sums, products, and so on may be very complicated, so that approaches which essentially use this fact are doomed to be very inefficient. Another simple approach was proposed by Shackell (see the first algorithm in that paper). The idea is to keep all algebraic relations which hold between a finite number of power series in a Gröbner basis. If we want to test a new relation, then we include it in the basis and look whether we obtain a contradiction. If not, then we keep on adding the derivatives of all relations in the basis. Under certain hypotheses, this process always ends up in a contradiction or a proof that the added relations all hold. However, the approach does not seem to provide any reasonable complexity bounds. Yet another interesting approach to the zero-test problem is to change our point of view. The differentially algebraic power series $\varphi_1, \ldots, \varphi_n$ at the top of this section are usually specified by a finite number of algebraic differential equations and initial conditions. Now instead of asking whether a given expression represents zero, we may ask for which initial conditions for $\varphi_1, \ldots, \varphi_n$ the expression represents zero. It turns out that the set of such initial conditions is a closed algebraic set. The ``difficult'' cases in the zero-test correspond to the situation in which the original initial conditions are in the closure of an open subset of initial conditions where the answer is ``easier''. It would be interesting to investigate whether this approach of varying the initial conditions may yield a better power series analogue of Khovanskii's results. A present disadvantage of the method is that it only applies in the convergent case and that it is not yet clear how to obtain complexity bounds. The approach in this paper is similar to Shackell's zero-equivalence algorithm for function fields defined by algebraic differential equations. A better understanding (and a complexity analysis) of this algorithm were obtained in later work.
In order to explain the underlying idea behind the present algorithm, let us assume for simplicity that $n = 1$ and that $\varphi = \varphi_1$ satisfies the algebraic differential equation $P(\varphi) = 0$. Now suppose that we want to test whether $Q(\varphi) = 0$ for a second algebraic differential polynomial $Q$. Then we first use differential algebra to determine a third equation $R = 0$ which is equivalent to $P = 0$ and $Q = 0$ under certain non-degeneracy conditions. Now we consider $\varphi$ as an indeterminate and try to solve $R = 0$ in a suitable differential overfield $\mathbb{L}$ of $C[[z]]$. This field $\mathbb{L}$ consists of so-called logarithmic transseries and has the nice property that $R = 0$ still has a unique solution in $\mathbb{L}$ for the initial conditions of $\varphi$. Hence, $Q(\varphi) = 0$ if and only if $R = 0$ admits a solution $\tilde{\varphi}$ in $\mathbb{L}$ with these initial conditions, in which case we have $\tilde{\varphi} = \varphi$. The approach has the advantages that it accommodates divergent differentially algebraic power series $\varphi_1, \ldots, \varphi_n$ and that the degeneracy of the initial conditions is not amplified during the resolution process. We also have a good hope to obtain complexity bounds along the same lines as in earlier work and some hope to generalize the approach to the multivariate setting. We finally expect the approach to be one of the most efficient ones in practice, although no implementations are available yet. Let $C$ be an effective field of characteristic $0$. This means that all elements of $C$ can be represented effectively and that there are algorithms for performing the field operations and testing equality (or, equivalently, for zero-testing). Let $\mathfrak{C}$ be an effective differential ring (i.e. the differentiation is effective too). We assume that the elements of $\mathfrak{C}$ are formal power series in $C[[z]]$, that $C[z] \subseteq \mathfrak{C}$, and that the differentiation $\delta$ on $\mathfrak{C}$ corresponds to the differentiation $\delta = z\, \partial/\partial z$ on $C[[z]]$. Moreover, we will assume that $\mathfrak{C}$ is an effective power series domain, i.e. there exists an algorithm which takes $f \in \mathfrak{C}$ and $k \in \mathbb{N}$ on input and which computes the coefficient $f_k \in C$ of $z^k$ in $f$. Notice that this implies the existence of an algorithm to compute the valuation of a non-zero $f \in \mathfrak{C}$.
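The notion of an effective power series domain can be modelled concretely as a coefficient oracle together with the ring operations and the derivation $\delta = z\,\partial/\partial z$. The toy class below is our own illustration, not the paper's formalism; it also makes tangible why the valuation is computable only for series already known to be non-zero — on the zero series, the search never terminates.

```python
from fractions import Fraction
from itertools import count

class Series:
    """Effective power series: a coefficient oracle k -> f_k, together with
    ring operations and the derivation delta = z d/dz used in the text."""
    def __init__(self, coeff):
        self.coeff = coeff
    def __add__(self, g):
        return Series(lambda k: self.coeff(k) + g.coeff(k))
    def __mul__(self, g):
        return Series(lambda k: sum(self.coeff(i) * g.coeff(k - i)
                                    for i in range(k + 1)))
    def delta(self):
        # delta(z^k) = k z^k, so delta just multiplies the k-th coefficient by k
        return Series(lambda k: k * self.coeff(k))
    def valuation(self):
        # terminates only when the series is non-zero, as noted in the text
        return next(k for k in count() if self.coeff(k) != 0)

geom = Series(lambda k: Fraction(1))   # 1/(1 - z) = 1 + z + z^2 + ...
print(geom.delta().valuation())        # 1, since delta(geom) = z + 2z^2 + ...
```

Each operation returns a new oracle, so coefficients are computed lazily on demand — the natural execution model for the expansion algorithms cited later.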
Now consider a non-zero differential polynomial $Q \in \mathfrak{C}[F, \ldots, F_r] \subseteq \mathfrak{C}\{F\}$ of order $r$ (recall that $F_i = \delta^i F$) and a power series solution $f \in C[[z]]$ to $Q(f) = 0$. We will assume that $f$ is not a multiple solution, i.e. $(\partial Q/\partial F_i)(f) \neq 0$ for some $i \in \{0, \ldots, r\}$ (if $f$ is a multiple solution, then we may always replace $Q$ by a non-zero $\partial Q/\partial F_i$ and continue doing this until $f$ is no longer a multiple solution). Choose $i$ such that the valuation of $(\partial Q/\partial F_i)(f)$ is minimal. Then, modulo a transformation of the form <\expand|equation*> $f \longmapsto f_0 + \cdots + f_k z^k + \tilde{f}\, z^k$ (where $\tilde{f}$ becomes the new unknown) and division of the equation by a suitable power of $z$, we may also assume that <\equation> $Q = L F + z M$, where $L \in C[\delta]$ and $M \in \mathfrak{C}\{F\}$. Let $L_a \in C[k]$ be the polynomial we get from $L$ when reinterpreting $\delta$ as an indeterminate $k$. Then the relation $Q = L F + z M$ yields a recurrence relation for all but a finite number of coefficients of $f$: <\equation> $f_k = - L_a(k)^{-1}\, (z M(f))_k$. Indeed, the only $k$ for which this relation does not hold are the roots of $L_a$. There are at most $\deg L_a$ such $k$ and they correspond to the initial conditions for $f$. Let $s$ be the largest root of $L_a$ in $\mathbb{N}$ (or $s = -1$ if such a root does not exist). Then we notice in particular that $f$ is the unique solution to $Q(f) = 0$ whose first $s + 1$ coefficients are $f_0, \ldots, f_s$. In what follows, we will show that the differential ring $\mathfrak{C}\{f\}$ is again an effective power series domain. Now elements in $\mathfrak{C}\{f\}$ can naturally be represented as the images of differential polynomials in $\mathfrak{C}\{F\}$ under the substitution $F \mapsto f$. It therefore suffices to design an algorithm to test whether $P(f) = 0$ for a given differential polynomial $P \in \mathfrak{C}\{F\}$. Our algorithm is based on Ritt reduction and the resolution of algebraic differential equations in more general rings of formal power series. We will use standard notations from differential algebra: <\itemize> $I_P$ denotes the initial and $H_P$ denotes the separant of a differential polynomial $P$. The rank of $P \in \mathfrak{C}\{F\} \setminus \mathfrak{C}$ is given by $\operatorname{rank} P = (r, d) \in \mathbb{N} \times \mathbb{N}$, where $r$ is the order of $P$ and $d = \deg_{F_r} P$ the degree of $P$ in $F_r$. Notice that the set of possible ranks is well-ordered for the lexicographical ordering. Given $A, B \in \mathfrak{C}\{F\}$, we denote by $A \operatorname{rem} B$ the Ritt remainder of $A$ with respect to $B$.
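As an illustration of the normalized form $Q = L F + z M$ and the induced recurrence, consider the hypothetical example $Q = \delta F - z (F + 1)$ (our own choice, not from the paper), so that $L = \delta$, $M = -(F + 1)$, $L_a(k) = k$, and the only root of $L_a$ in $\mathbb{N}$ is $k = 0$; hence $s = 0$ and $f_0$ is the initial condition. The sketch below unrolls the recurrence.

```python
from fractions import Fraction

# Q = delta F - z (F + 1):  L = delta, M = -(F + 1), so L_a(k) = k.
# For k >= 1 the recurrence f_k = -L_a(k)^{-1} (z M(f))_k reads
#   f_k = (f_{k-1} + [k = 1]) / k,
# since the coefficient of z^k in z (f + 1) is f_{k-1} plus 1 when k = 1.
def solve(n, f0=Fraction(0)):
    f = [f0]
    for k in range(1, n):
        rhs = f[k - 1] + (1 if k == 1 else 0)
        f.append(rhs / k)
    return f

coeffs = solve(6)
# With f_0 = 0 the unique solution is exp(z) - 1, i.e. f_k = 1/k! for k >= 1:
# indeed delta(exp(z) - 1) = z exp(z) = z ((exp(z) - 1) + 1).
print(coeffs == [Fraction(0), Fraction(1), Fraction(1, 2),
                 Fraction(1, 6), Fraction(1, 24), Fraction(1, 120)])  # True
```

Different choices of $f_0$ give the other solutions of the equation, matching the statement that the roots of $L_a$ in $\mathbb{N}$ correspond to the initial conditions.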
We thus have a relation <\expand|equation*> $I_B^{\alpha} H_B^{\beta} A = X \ast B + (A \operatorname{rem} B)$, where $\alpha, \beta \in \mathbb{N}$, $X \ast B$ stands for a linear combination of $B$ and its derivatives with coefficients in $\mathfrak{C}\{F\}$, and $\operatorname{rank} (A \operatorname{rem} B) < \operatorname{rank} B$. <\remark> At a first glance, our setting may seem less general than the one in the beginning of section 2. However, since we will prove that $\mathfrak{C}\{f\}$ is again an effective power series domain, we may repeat the construction after replacing $\mathfrak{C}$ by $\mathfrak{C}\{f\}$, and add as many other functions $f_1, \ldots, f_n$ as we like. In fact, it suffices to add one $f_i$ for each subexpression of the form $\varphi_j \circ g$. It is well known that any non-trivial algebraic equation with coefficients in $C[[z]]$ has a solution in the field of Puiseux series over the algebraic closure $\bar{C}$ of $C$. We will sketch the proof of an analogous result in the case of algebraic differential equations. For a full proof (of a more general result) we refer to the preprint mentioned in the introduction. Already for equations as simple as $\delta f = 1$, it is clear that solutions might involve logarithms. In fact, they may even involve iterated logarithms. Let $\mathfrak{M}$ be the totally ordered group of logarithmic monomials with powers in $\mathbb{Q}$. More precisely, the elements of $\mathfrak{M}$ are monomials <\equation> $\mathfrak{m} = z^{\alpha_0} (\log z)^{\alpha_1} \cdots (\log_l z)^{\alpha_l}$, where $\alpha_0, \ldots, \alpha_l \in \mathbb{Q}$ and $\log_i$ stands for the $i$-th iterated logarithm. Given such a monomial, we write $\mathfrak{m} \prec 1$ if and only if $\mathfrak{m} \rightarrow 0$ as $z \rightarrow 0$; this can be read off from the sign of $\alpha_i$, where $i$ denotes the least index with $\alpha_i \neq 0$. This yields a total ordering $\preccurlyeq$ on $\mathfrak{M}$. The asymptotic relation $\mathfrak{m} \prec \mathfrak{n}$ corresponds to writing $\mathfrak{m} = o(\mathfrak{n})$ as $z \rightarrow 0$. A subset $\mathfrak{S} \subseteq \mathfrak{M}$ is said to be grid-based, if there exist monomials $\mathfrak{m}_1 \prec 1, \ldots, \mathfrak{m}_k \prec 1$ and $\mathfrak{n}$ in $\mathfrak{M}$, such that $\mathfrak{S} \subseteq \mathfrak{m}_1^{\mathbb{N}} \cdots \mathfrak{m}_k^{\mathbb{N}}\, \mathfrak{n}$. A logarithmic transseries is a mapping $f: \mathfrak{M} \rightarrow C$ with grid-based support. We will usually write such series using the infinite sum notation $f = \sum_{\mathfrak{m} \in \mathfrak{M}} f_{\mathfrak{m}}\, \mathfrak{m}$ and we denote the set of all logarithmic transseries by $\mathbb{L} = C[[\mathfrak{M}]]$. Since the support of each non-zero $f \in \mathbb{L}$ is grid-based (whence well-ordered), this support admits a $\preccurlyeq$-maximal element $\mathfrak{d}_f$, which is called the dominant monomial of $f$.
It can be shown that $\mathbb{L} = C[[\mathfrak{M}]]$ is a field for the operations <\expand|eqnarray*> $f + g = \sum_{\mathfrak{m} \in \mathfrak{M}} (f_{\mathfrak{m}} + g_{\mathfrak{m}})\, \mathfrak{m}$ and $f g = \sum_{\mathfrak{m} \in \mathfrak{M}} \Big( \sum_{\mathfrak{v} \mathfrak{w} = \mathfrak{m}} f_{\mathfrak{v}} g_{\mathfrak{w}} \Big)\, \mathfrak{m}$. In the second formula, the grid-based support property ensures that $(f g)_{\mathfrak{m}} = \sum_{\mathfrak{v} \mathfrak{w} = \mathfrak{m}} f_{\mathfrak{v}} g_{\mathfrak{w}}$ is a finite sum. There also exists a natural derivation $\delta$ on $\mathbb{L}$, which sends each monomial $\mathfrak{m} \in \mathfrak{M}$ of the above form to <\equation> $\delta \mathfrak{m} = \Big( \alpha_0 + \frac{\alpha_1}{\log z} + \cdots + \frac{\alpha_l}{\log z \cdots \log_l z} \Big)\, \mathfrak{m}$. This derivation extends to the whole of $\mathbb{L}$ by (infinitary) ``strong linearity''. Before proving that solutions to algebraic differential equations with coefficients in $\mathbb{L}$ always exist, we first observe that we have the following uniqueness result: <\lemma> Let $Q \in \mathfrak{C}\{F\}$ be a differential polynomial of the normalized form $Q = L F + z M$ from section 3, let $f \in C[[z]]$ be a solution to $Q(f) = 0$ and let $s$ be defined as in section 3. Then the equation $Q(\tilde{f}) = 0$ with side condition $\tilde{f} - (f_0 + \cdots + f_s z^s) \prec z^s$ admits $f$ as its unique solution in $\mathbb{L}$. <\proof> Each series $\tilde{f} \in \mathbb{L}$ may be expanded as a Puiseux series in $z$, <\equation> $\tilde{f} = \sum_{k} \tilde{f}_k z^k$, where the coefficients $\tilde{f}_k$ are series in $C[[\mathfrak{M}_{\log}]]$ and $\mathfrak{M}_{\log}$ denotes the group of monomials of the form $(\log z)^{\alpha_1} (\log \log z)^{\alpha_2} \cdots$. Notice that we may interpret $\mathfrak{M}$ as the lexicographical product of $z^{\mathbb{Q}}$ and $\mathfrak{M}_{\log}$. For this expansion, the recurrence relation from section 3 still determines the coefficients of $\tilde{f}$ in a unique way for all $k > s$. A classical way to solve algebraic equations over power series is to use the Newton polygon method. We have generalized this method to algebraic differential equations. In fact, it is more convenient to solve asymptotic differential equations of the form <\equation> $P(f) = 0 \quad (f \prec \mathfrak{v})$, where $P \in \mathbb{L}\{F\}$ and $\mathfrak{v} \in \mathfrak{M}$. In the sequel, we will assume that $C$ is algebraically closed. In order to solve such an equation, we start by determining all possible dominant monomials $\mathfrak{m} \prec \mathfrak{v}$ of non-zero solutions and their corresponding dominant coefficients. Actually, it is convenient to characterize such potential dominant terms first. It suffices to characterize when $1$ is a potential dominant monomial: we will then say that $\mathfrak{m}$ is a potential dominant monomial, if $1$ is a potential dominant monomial for the equation <\expand|equation*> $P_{\times \mathfrak{m}}(f) = 0 \quad (f \prec \mathfrak{v}/\mathfrak{m})$. Here $P_{\times \mathfrak{m}}$ denotes the unique differential polynomial in $\mathbb{L}\{F\}$ with $P_{\times \mathfrak{m}}(f) = P(f\, \mathfrak{m})$ for all $f$.
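The derivation formula on logarithmic monomials is easy to implement on exponent vectors: the term for $\alpha_i$ multiplies the monomial by $1/(\log z \cdots \log_i z)$, i.e. it decrements the exponents at positions $1$ through $i$. The following sketch (our own encoding of monomials as exponent tuples) makes this explicit.

```python
from collections import defaultdict

def delta_monomial(m):
    """Apply delta = z d/dz to the monomial z^a0 (log z)^a1 ... (log_l z)^al,
    encoded as the exponent tuple m = (a0, a1, ..., al).  The term for a_i
    carries coefficient a_i on the monomial m / (log z * ... * log_i z),
    i.e. m with the exponents at positions 1..i decremented."""
    out = defaultdict(int)
    for i, a in enumerate(m):
        if a == 0:
            continue
        e = list(m)
        for j in range(1, i + 1):
            e[j] -= 1
        out[tuple(e)] += a
    return dict(out)

# delta(z^2 log z) = 2 z^2 log z + z^2:
print(delta_monomial((2, 1)))     # {(2, 1): 2, (2, 0): 1}
# delta(log log z) = 1 / log z:
print(delta_monomial((0, 0, 1)))  # {(0, -1, 0): 1}
```

By (strong) linearity, applying this monomial rule coefficient-wise extends the derivation to whole transseries, exactly as stated in the text.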
Write $P = \sum_{\alpha} P_{\alpha} F^{\alpha}$ using multi-indices $\alpha$ and let $\mathfrak{d} = \max_{\preccurlyeq} \{ \mathfrak{d}_{P_{\alpha}} \}$. Then the Newton polynomial of $P$ is defined to be the scalar differential polynomial <\expand|equation*> $N_P = \sum_{\alpha} P_{\alpha, \mathfrak{d}}\, F^{\alpha}$ in $C\{F\}$, where $P_{\alpha, \mathfrak{d}} = (P_{\alpha})_{\mathfrak{d}}$. We also define the dominant part $D_P \in C[[\mathfrak{M}_{\log}]]\{F\}$ of $P$ by <\expand|equation*> $D_P = \sum_{\alpha} P_{\alpha, v}\, F^{\alpha}$, where $v$ is the valuation of $P$ in $z$ and $P_{\alpha, v}$ denotes the coefficient of $z^{v}$ in $P_{\alpha}$. Assume first that $N_P \in C[F] (\delta F)^{\mathbb{N}}$ and $D_P = N_P$. Then we have <\expand|equation*> $P(c + \varepsilon) - N_P(c)\, \mathfrak{d} \prec\!\!\prec \mathfrak{d}$ for all $c \in \bar{C}$ and $\varepsilon \prec\!\!\prec 1$. Here $\mathfrak{m} \prec\!\!\prec \mathfrak{n}$, if $\mathfrak{m}/\mathfrak{n} = z^{\alpha_0} (\log z)^{\alpha_1} \cdots (\log_l z)^{\alpha_l}$ with $\alpha_0 > 0$. We say that $1$ is a potential dominant monomial of a solution to the asymptotic equation if and only if $N_P$ admits a non-zero constant root $c \in \bar{C}$ (and $c$ is then a potential dominant term). Furthermore, $N_P$ admits such a non-zero root if and only if $N_P \notin C\, F^{\mathbb{N}} (\delta F)^{\mathbb{N}}$, because $N_P \in C[F] (\delta F)^{\mathbb{N}}$ and $\bar{C}$ is algebraically closed. If $D_P \neq N_P$ or $N_P \notin C[F] (\delta F)^{\mathbb{N}}$, then we use the technique of ``upward shifting''. Given $A \in \mathbb{L}\{F\}$, we define $A{\uparrow}$ to be the unique differential polynomial in $\mathbb{L}\{F\}$ such that <\expand|equation*> $A{\uparrow}(f{\uparrow}) = A(f){\uparrow}$ for all $f$, where $f{\uparrow} = f \circ \exp$. For instance, $(\delta F - 1){\uparrow} = z^{-1} \delta F - 1$, and we notice that the logarithmic solution $\log z$ of $\delta f = 1$ transforms into the non-logarithmic solution $z$ of $z^{-1} \delta f = 1$ under the upward shifting $f \longmapsto f{\uparrow}$. Now we proved in the aforementioned preprint that, after a finite number of replacements $P \longmapsto P{\uparrow}$, we obtain a differential polynomial whose dominant part stabilizes and lies in $C[F] (\delta F)^{\mathbb{N}}$. In general, we say that $1$ is a potential dominant monomial if and only if the Newton polynomial of this stabilized dominant part admits a non-zero root in $\bar{C}$. It is clear that if $f$ is a solution to the asymptotic equation, then $\mathfrak{d}_f$ must be a potential dominant monomial of a solution. We say that a potential dominant monomial $\mathfrak{m} \prec \mathfrak{v}$ is classical, if $N_{P_{\times \mathfrak{m}}}$ is not homogeneous (i.e. not of the form $c\, F^{a} (\delta F)^{b}$). These classical potential dominant monomials are finite in number and they can be determined from something which resembles the Newton polygon in the algebraic case, by using a succession of multiplicative conjugations $P \longmapsto P_{\times z^{\alpha}}$ and upward shiftings $P \longmapsto P{\uparrow}$ of the dominant parts. Once we have found a potential dominant term $\varphi = c\, \mathfrak{m}$ of a solution to the asymptotic equation, we may consider the refinement <\equation> $f = \varphi + \tilde{f} \quad (\tilde{f} \prec \tilde{\mathfrak{v}})$.
In other words, a refinement is a change of variables together with the imposition of a new asymptotic constraint. It transforms the asymptotic equation into a new asymptotic differential equation <\equation> $P_{+\varphi}(\tilde{f}) = 0 \quad (\tilde{f} \prec \tilde{\mathfrak{v}})$. Using the possibly transfinite process of determining potential dominant terms and making refinements, one finds all solutions. However, a more careful study is required to ensure that one remains in the context of grid-based transseries and that (for instance) no transseries like <\equation> $\log z + \log_2 z + \log_3 z + \cdots$ may occur as solutions. In order to do this, it is convenient to associate an invariant to the asymptotic equation: the highest possible degree of the Newton polynomial $N_{P_{\times \mathfrak{m}}}$ that we can achieve for a monomial $\mathfrak{m} \prec \mathfrak{v}$ is called the Newton degree of the equation and we denote it by $\deg_{\prec \mathfrak{v}} P$. In the algebraic case, the Newton degree measures the number of solutions to the asymptotic equation, when counting with multiplicities. In the differential case, it only gives a lower bound (see the theorem below). Also, an equation of Newton degree $0$ does not admit any solutions. Now we have shown that the Newton degree decreases during refinements and that quasi-linear equations (i.e. equations of Newton degree $1$) always admit solutions. Finally, in the case when $\deg_{\prec \tilde{\mathfrak{v}}} P_{+\varphi} = \deg_{\prec \mathfrak{v}} P \geq 2$, it is possible to replace $\varphi$ by a solution to a quasi-linear equation of the form <\equation> $\frac{\partial^{a_0 + \cdots + a_r} P}{\partial F^{a_0} \cdots \partial F_r^{a_r}}\, (\varphi) = 0 \quad (\varphi \prec \mathfrak{v})$, and force the Newton degree to strictly decrease after a finite number of steps. In other words, the transfinite resolution process has been replaced by an essentially finite algorithm, which avoids solutions like the iterated logarithm sum above. In particular, these methods yield the following theorem: <\theorem> Consider an asymptotic algebraic differential equation of Newton degree $d > 0$ over $\mathbb{L}$. Then this equation admits at least $d$ solutions in $\mathbb{L}$ when counting with multiplicities. In what follows, we assume that $\mathfrak{C}$, $Q$, $f$ and $s$ are as in section 3. We will give an algorithm to test whether $P(f) = 0$ for a given $P \in \mathfrak{C}\{F\}$. We will write $P \equiv 0$ if and only if $P(f) = 0$.
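Before stating the algorithm, note that its reduction step rests on (pseudo-)division with a strict drop in rank, together with the well-ordering of ranks. In the purely algebraic, order-$0$ shadow of Ritt reduction this is ordinary pseudo-division of univariate polynomials, sketched below (our own illustration only; the actual algorithm reduces differential polynomials and accumulates initials and separants in a factor $H$).

```python
def pseudo_rem(A, B):
    """Pseudo-remainder of A by B (coefficient lists, constant term first):
    returns R with deg R < deg B such that lc(B)^a * A = X*B + R for some
    a and some polynomial X -- the order-0 shadow of a Ritt reduction step."""
    A = A[:]
    db, lb = len(B) - 1, B[-1]
    while len(A) - 1 >= db:
        la, da = A[-1], len(A) - 1
        A = [lb * c for c in A]       # multiply through by lc(B)
        for i in range(db + 1):       # subtract la * B * x^(da - db)
            A[da - db + i] -= la * B[i]
        while A and A[-1] == 0:       # the degree strictly drops
            A.pop()
    return A

print(pseudo_rem([-1, 0, 1], [-1, 1]))  # F^2 - 1 rem (F - 1) -> [] (exact)
print(pseudo_rem([1, 0, 1], [-1, 1]))   # F^2 + 1 rem (F - 1) -> [2]
```

Termination is exactly the well-ordering argument used in the correctness proof: the degree (here) or the rank (in the differential case) strictly decreases at every iteration.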
<\algorithm|zero-test> Input: a differential polynomial $P \in \mathfrak{C}\{F\}$. Output: true if and only if $P \equiv 0$. <\body> [Initialize] <\indent> Set $H \leftarrow 1$ and $R \leftarrow P$. [Reduction] <\indent> As long as the rank of $R$ can be decreased, repeat: <\indent> Write $R = I V + W$, where $I$ is the initial of $R$ and $V$ the corresponding power product of the leader of $R$. If $V \equiv 0$, then set $R \leftarrow R - I V$. Otherwise, if the partial derivative $S$ of $R$ with respect to its leader satisfies $S \equiv 0$, then set $H \leftarrow I H$ and $R \leftarrow R \operatorname{rem} S$. Otherwise, set $H \leftarrow I S H$ and $R \leftarrow Q \operatorname{rem} R$, and keep on reducing. [Final test] <\indent> Let $k$ be minimal with $\deg_{\prec z} H_{+f_0 + \cdots + f_k z^k} = 0$, where $P_{+g}$ denotes the additive conjugate $P_{+g}(F) = P(g + F)$. Replace $k$ by $\max\{k, s\}$ and return true if and only if $\deg_{\prec z} R_{+f_0 + \cdots + f_k z^k} \neq 0$. <\remark> In the particular case when an asymptotic differential equation has power series coefficients in $C[[z]]$ and $\mathfrak{v} = z^{k}$, its Newton degree can be read off from the valuations in $z$ of these power series coefficients. In particular, the minimal $k$ in the final test can be found by expanding the power series coefficients of $H$ in $z$ using any fast expansion algorithm for solutions to differential equations. <\theorem> The above algorithm for testing whether $P \equiv 0$ terminates and is correct. <\proof> In the loop of the reduction step, we notice that the rank of $R$ strictly decreases at each iteration. Also, the rank of the polynomial being tested in each recursive call of the zero-test ($V$, $I$ or $S$) is strictly smaller than the rank of $R$ (and whence than the rank of $P$). These two observations, and the fact that the set of possible ranks is well-ordered, imply the termination of the algorithm; the existence of a minimal $k$ with $\deg_{\prec z} H_{+f_0 + \cdots + f_k z^k} = 0$ will be proved below. As to the correctness of the algorithm, we claim that at the start and at the end of the loop, we maintain the property that $H \not\equiv 0$, as well as the existence of a relation of the form <\equation> $H^{\alpha} P = A R + B$, where $\alpha \in \mathbb{N}$ and $A$ and $B$ are differential polynomials with $B \equiv 0$. Indeed, we have $A = 1$ and $B = 0$ at the first entry. If $V \equiv 0$ and $\tilde{R} = R - I V$, then we have $H^{\alpha} P = A R + B = A \tilde{R} + (B + A I V)$, with $B + A I V \equiv 0$. If $S \equiv 0$, $\tilde{H} = I H$ and $\tilde{R} = R \operatorname{rem} S$, then $I_S^{\beta} R = X \ast S + \tilde{R}$ for some $\beta \in \mathbb{N}$ and some differential polynomial $X$. Also, $I_S = d\, I$, where $d \geq 2$ is the degree of $R$ in the highest $F_i$ occurring in $R$. Consequently, denoting $\tilde{\alpha} = \max(\alpha, \beta)$, we have $\tilde{H}^{\tilde{\alpha}} P = I^{\tilde{\alpha}} H^{\tilde{\alpha}} P = I^{\tilde{\alpha}} H^{\tilde{\alpha} - \alpha} (A R + B)$, and substituting $d^{-\beta} I^{-\beta} (X \ast S + \tilde{R})$ for $R$ yields a relation of the required form, whose last member $d^{-\beta} I^{\tilde{\alpha} - \beta} H^{\tilde{\alpha} - \alpha} A\, (X \ast S) + I^{\tilde{\alpha}} H^{\tilde{\alpha} - \alpha} B$ is again $\equiv 0$, since $S \equiv 0$ and $B \equiv 0$. The remaining case in the loop is treated in a similar way.
This proves our claim. Notice that the claim, together with the way in which $R$ was obtained from $P$, implies that $P \equiv 0$ if and only if $R \equiv 0$. Let us now assume that the algorithm reaches the final test. Since $H \not\equiv 0$, we may write $H(f) = h_v z^v + O(z^{v+1})$ with $h_v \neq 0$ for some $v \in \mathbb{N}$. For this $v$, we have $\deg_{\prec z} H_{+f_0 + \cdots + f_v z^v} = 0$, which implies that there exists a minimal number $k$, such that both $\deg_{\prec z} H_{+f_0 + \cdots + f_k z^k} = 0$ and $k \geq s$. If $\deg_{\prec z} R_{+f_0 + \cdots + f_k z^k} = 0$, then we have $R(f_0 + \cdots + f_k z^k + \varepsilon) \neq 0$ for all $\varepsilon \prec z^k$; in particular, $R(f) \neq 0$, whence $R \not\equiv 0$ and $P \not\equiv 0$. Conversely, assume that $\deg_{\prec z} R_{+f_0 + \cdots + f_k z^k} \neq 0$. Then the theorem of section 4 implies the existence of a series $\tilde{f} \in \bar{C}[[\mathfrak{M}]]$ with $R(\tilde{f}) = 0$ and $\tilde{f} - (f_0 + \cdots + f_k z^k) \prec z^k$. Since $H$ collects the initials and separants which were encountered during the reduction, we have a relation of the form <\equation> $H^{\gamma} Q = X_0 R + X_1\, \delta R + \cdots + X_m\, \delta^m R$, where $\gamma \in \mathbb{N}$ and $X_0, \ldots, X_m$ are differential polynomials. Now $\deg_{\prec z} H_{+f_0 + \cdots + f_k z^k} = 0$ implies $H(\tilde{f}) \neq 0$, so that $Q(\tilde{f}) = 0$. But, by the lemma of section 4, there exists a unique solution in $\mathbb{L}$ to the equation $Q(\tilde{f}) = 0$ with the side condition $\tilde{f} - (f_0 + \cdots + f_s z^s) \prec z^s$. Hence $\tilde{f} = f$, so that $R(f) = R(\tilde{f}) = 0$, whence $R \equiv 0$ and $P \equiv 0$. <\bibliography|bib|acm|zerotest> Ax, J. On Schanuel's conjecture. Ann. of Math. 93 (1971), 252--268. Boulier, F. Étude et implantation de quelques algorithmes en algèbre différentielle. PhD thesis, University of Lille I, 1994. Brent, R. P., and Kung, H. T. Fast algorithms for manipulating formal power series. J. ACM 25 (1978), 581--595. Denef, J., and Lipshitz, L. Power series solutions of algebraic differential equations. Math. Ann. 267 (1984), 213--238. Denef, J., and Lipshitz, L. Decision problems for differential equations. J. Symbolic Logic 54, 3 (1989), 941--950. Khovanskii, A. G. Fewnomials. American Mathematical Society, Providence, RI, 1991. Péladan-Germa, A. Testing identities of series defined by algebraic partial differential equations. In Applied Algebra, Algebraic Algorithms and Error-Correcting Codes (1995), G. Cohen, M. Giusti, and T. Mora, Eds., Springer-Verlag, pp. 393--407. Proceedings of the 11th International Symposium, AAECC-11, Paris, France, July 1995. Richardson, D. The uniformity conjecture. In Computability and Complexity in Analysis (2001), vol. 2064 of Lecture Notes in Computer Science, Springer-Verlag, pp. 253--272. Risch, R. H. Algebraic properties of elementary functions in analysis. Amer. J. Math. 101 (1975), 743--759. Shackell, J. A differential-equations approach to functional equivalence. In ISSAC '89 (Portland, Oregon, 1989), G. Gonnet, Ed., A.C.M. Press, pp. 7--10. Shackell, J. Zero-equivalence in function fields defined by algebraic differential equations. Trans. Amer. Math. Soc. 336 (1993), 151--172. Shackell, J., and van der Hoeven, J. Complexity bounds for zero-test algorithms. Tech. Rep. 2001-63, Prépublications d'Orsay, 2001. van der Hoeven, J. Automatic asymptotics. PhD thesis, École polytechnique, France, 1997. van der Hoeven, J. Relax, but don't be too lazy. Tech. Rep. 78, Prépublications d'Orsay, 1999. Submitted to JSC. van der Hoeven, J. Complex transseries solutions to algebraic differential equations. Tech. Rep. 2001-34, Univ. d'Orsay, 2001. van der Hoeven, J. Fast evaluation of holonomic functions near and in singularities. J. Symbolic Comput. 31 (2001), 717--743. van der Hoeven, J. Zero-testing, witness conjectures and differential diophantine approximation. Tech. Rep. 2001-62, Prépublications d'Orsay, 2001.