This work has been supported by the ANR-10-BLAN-0109 LEDA project.

Abstract
In this paper, we will present several algorithms for computing with D-algebraic power series. Such power series are specified by one or more algebraic differential equations and a sufficient number of initial conditions. The emphasis is not on the efficient computation of coefficients of such power series (various techniques are known for that), but rather on the ability to decide whether expressions involving D-algebraic power series are zero. We will consider both univariate and multivariate series and, besides the usual ring operations and differentiation, we will also consider composition, implicitly determined power series and monomial transformations.
Keywords: D-algebraic power series, algorithm, zero test, implicit function
Let be a field of characteristic zero. A power series is said to be D-algebraic if it satisfies a non-trivial differential equation , where is a polynomial with coefficients in . The set of D-algebraic power series contains many classical transcendental functions, such as , , , etc., and it is closed under the ring operations, restricted division, differentiation and composition. This makes the differential ring of D-algebraic power series suitable as a framework for exact computations with mathematical expressions involving transcendental functions.
The notion of a D-algebraic power series admits a straightforward generalization to the multivariate context. In this case, we require the satisfaction of a non-trivial algebraic differential equation with respect to each of the partial derivatives. The multivariate context allows for some additional operations, such as the resolution of implicit power series equations and general monomial transformations with rational powers. Again, the set of D-algebraic power series is stable under such operations.
There are two main aspects of computing with formal power series. On the one hand, we need fast algorithms for the computation of coefficients. There is an extensive literature on this subject and the asymptotically fastest methods either rely on Newton's method [2, 1, 9] or on relaxed power series evaluation [8, 6, 10].
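As a toy illustration of coefficient computation (much simpler than the asymptotically fast methods cited above), the equation f' = f with f(0) = 1 yields the recurrence (n+1)·a[n+1] = a[n], so each coefficient follows from the previous one. A minimal Python sketch:

```python
# Coefficients of the solution of f' = f, f(0) = 1 (i.e. f = exp),
# read off from the recurrence (n + 1) * a[n + 1] = a[n].
from fractions import Fraction

def exp_coefficients(order):
    a = [Fraction(1)]              # initial condition a[0] = f(0) = 1
    for n in range(order):
        a.append(a[n] / (n + 1))   # a[n + 1] = a[n] / (n + 1)
    return a

# a[n] = 1/n!
```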
On the other hand, there is the problem of deciding whether a given power series is zero. This problem is hard in the sense that we need to check the cancellation of an infinite number of coefficients. Therefore, a related question is how to represent power series in such a way that such zero tests can be designed. We also note the asymmetric nature of the problem: given a non-zero series , it is usually easy to prove that : it suffices to compute a non-zero coefficient. However, if vanishes, then it is potentially difficult to establish a formal proof of this fact.
In this paper, we will focus on the second aspect. We will consider various representations for D-algebraic power series, show how to perform common operations on power series when using these representations, and also present several zero tests. All representations are based on a combination of differential equations satisfied by the power series and initial conditions. However, depending on additional properties of these equations, some representations are more suitable than others for performing common operations and zero testing.
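This asymmetry can be made concrete: a series given only by a coefficient oracle is certified non-zero by exhibiting a single non-zero coefficient, whereas certifying that it is zero requires an a priori bound on how far to look. A small sketch (the oracle `f` below is an arbitrary toy example):

```python
def first_nonzero(coeff, bound):
    """Index of the first non-zero coefficient among 0..bound, or None.

    Finding some n with coeff(n) != 0 *proves* the series non-zero;
    a return value of None proves nothing unless `bound` is known to be
    a valid root separation bound for the series at hand.
    """
    for n in range(bound + 1):
        if coeff(n) != 0:
            return n
    return None

# toy oracle: zero at even indices, alternating signs at odd indices
f = lambda n: 0 if n % 2 == 0 else (-1) ** (n // 2)
```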
For global computations with algebraic differential equations, it is convenient to use the classical framework of differential algebra [17, 14]. In addition, we need some technology in order to deal with initial conditions. One key ingredient is the determination of the number of initial conditions which are needed in order to guarantee that a power series solution of a system of differential equations is unique. For this, we will use a technique similar to the one introduced by Denef and Lipshitz in [4, 5], and develop this technique in further detail.
Apart from section 2, which recalls some basic notions from differential algebra, the paper is subdivided into three main parts. In section 3, we first focus on the univariate case and the representation of a D-algebraic series by a single differential polynomial which annihilates it, together with a sufficient number of initial conditions. In section 4, we remain in the univariate setting, but switch to more flexible representations of D-algebraic series as solutions to systems of algebraic differential equations with sufficiently many initial conditions. In section 5, we generalize our results to the multivariate setting and also consider the additional operations of solving implicit equations, composition, and monomial transformations.
In section 3, we effectively represent D-algebraic power series by a pair , where is a computable power series (meaning that the function is computable) and a non-zero differential polynomial such that . Specializing a more general result from [4, 5], we will show how to compute a number (called a root separation bound for at ) with the property that the equation admits no other solutions with . Moreover, if is a “non-degenerate root” of (in the sense that , where is a simpler non-zero differential polynomial, called the separant of ), then we actually obtain an explicit recurrence relation for in terms of for .
In section 3.3, we will exploit the existence of such a recurrence relation in the non-degenerate case, by giving a first zero test for series in the differential field generated by . We will next strengthen the root separation bound by not only looking for other solutions of in , but also in . In section 3.4, this allows us to simplify the zero test (along similar lines as in [7]) and also to widen its scope to power series which depend on a finite number of parameters (Remark 10). We finally consider the case when is ill-specified as a degenerate root of . In sections 3.5 and 3.6, we give algorithms for computing root separation bounds in this case, as well as non-degenerate annihilators.
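For a concrete instance of such a recurrence, consider the (deliberately simple, chosen for illustration) non-degenerate equation f' = f² with f(0) = 1: extracting the coefficient of zⁿ gives (n+1)·a[n+1] = Σᵢ a[i]·a[n−i], which determines every coefficient from the earlier ones:

```python
from fractions import Fraction

def riccati_coefficients(order):
    # f' = f^2, f(0) = 1; the separant of P = f' - f^2 does not vanish
    # at f, so extracting coefficients gives an explicit recurrence.
    a = [Fraction(1)]
    for n in range(order):
        a.append(sum(a[i] * a[n - i] for i in range(n + 1)) / (n + 1))
    return a

# the exact solution is 1/(1 - z), so every coefficient equals 1
```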
In principle, annihilators of complex D-algebraic series (such as large expressions in other D-algebraic series) can be computed using brute force (Proposition 14). However, this technique is bound to be very inefficient. For this reason, we introduce the more flexible framework of D-domains in section 4. In this framework, we express D-algebraic series as rational functions in a finite number of D-algebraic series which satisfy a special kind of system of algebraic differential equations with initial conditions. We will show how to adapt the major results from section 3 to this setting.
In section 5, we turn our attention to multivariate D-algebraic series. We will start by showing how to interpret a multivariate D-algebraic series in as a D-algebraic series in whose coefficients are D-algebraic series in . We next generalize the notion of a D-domain and show that the above reduction to the univariate case can be carried out at the level of D-domains. We conclude by giving algorithms for some typical multivariate operations: the resolution of implicit equations, composition, and monomial transformations. In each of these cases, we will show how to avoid the computation of differential annihilators as much as possible, by remaining in the framework of multivariate D-domains.
There are several approaches to the zero test problem for D-algebraic power series [16, 4, 5, 13, 18, 19, 15, 12] and we refer to [7] for a brief discussion. From a logical point of view, the most important decision problems for power series solutions to algebraic differential equations with initial conditions were settled in [4, 5]. One essential tool in these papers is the computation of a generalization of root separation bounds for more general decision problems. In a sense, the present paper merely consists of specializations of these results to more specific problems. Nevertheless, we think that we introduced some noteworthy improvements, which we will now point out.
It should first be emphasized that the papers [4, 5] are based on a more general decision procedure for testing whether systems of differential equations and inequations with initial conditions admit solutions. Efficiency is not the major concern there. The authors also do not attempt to state their results in terms of classical differential algebra, even though they are aware of this possibility. From our point of view, one main contribution of this paper is to isolate the part of the problem which can be dealt with using classical differential algebra techniques from the part where initial conditions and root separation bounds come in (notice that [15] provides an interesting alternative way to achieve this goal). As a consequence, we can state our zero test algorithms explicitly, and we also believe them to be more efficient.
Our approach also contains a few theoretical improvements. First of all, we mainly work over a so-called “effective power series domain” instead of the constant field . For instance, we may take , where
is the differentially transcendental power series involved in the Euler-Maclaurin formula for the function. Similarly, the conditions on the constant field are slightly weaker: we merely require an effective constant field with an algorithm for the computation of all positive integer roots of polynomials in . The improved zero test from section 3.4 also allows for power series which depend on parameters.
These theoretical improvements were actually introduced in [7], but the current presentation is a bit simpler and more systematic. In particular, we only need to consider logarithmic power series instead of logarithmic transseries in the correctness proof of the improved zero test from section 3.4. Furthermore, we have added algorithms for the computation of non-degenerate annihilators and root separation bounds in the degenerate case.
Section 4 contains no theoretical improvements, but we expect the more flexible framework of D-domains to be most suitable for practical computations. It is interesting to see that the root separation bounds and both zero tests from section 3 can be generalized to this setting. In particular, for the computation of root separation bounds, we introduced the dominant Hermite normal form, which seems interesting in its own right.
As we show in section 5, a multivariate D-algebraic series in can always be regarded as a D-algebraic series in whose coefficients are D-algebraic series in . From the logical point of view, decision problems for such series therefore reduce to their univariate counterparts. However, there are a few additional operations, such as solving implicit equations, the extraction of coefficients, and monomial transformations. Not only do we present algorithms for carrying out such operations, but we also discuss ways to make them efficient, in the framework of multivariate D-domains.
Let us recall some standard notations from differential algebra. Let be a field of characteristic zero and let be a differential algebra which is also an integral domain. We will mainly work with respect to a single derivation . Given a finite number of indeterminates , we will denote by or simply by the differential ring of differential polynomials in and by or its fraction field.
We will assume an admissible ranking on the set . For instance, we may take whenever or and . Given such a ranking, the leader of a differential polynomial is the highest variable occurring in when is considered as a polynomial in . We will denote by the leader of . Considering as a polynomial in its leader, the leading coefficient is called the initial of and the partial derivative with respect to the leader is called the separant; we will denote . If has degree in its leader, then the pair is called the Ritt rank of and such pairs are ordered lexicographically. We will also denote in that case.
Given , we say that is reducible with respect to if there exists an such that or and . The process of Ritt reduction provides us with a relation of the form
where , , and where is reduced with respect to . We will denote .
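In the ordinary (non-differential) univariate case, the relation above specializes to classical pseudo-division: a suitable power of the initial of the reducer, multiplied by the polynomial to be reduced, equals a multiple of the reducer plus a remainder of smaller degree. A minimal integer-coefficient sketch:

```python
def pseudo_remainder(A, B):
    """Pseudo-remainder of A by B over the integers.

    Polynomials are coefficient lists, lowest degree first, with B[-1] != 0.
    Returns R such that lc(B)^k * A = Q * B + R with deg R < deg B,
    for some k >= 0 and some polynomial Q (neither is returned here).
    """
    A = A[:]
    db, lb = len(B) - 1, B[-1]
    while len(A) - 1 >= db and any(A):
        if A[-1] == 0:
            A.pop()
            continue
        la, shift = A[-1], len(A) - 1 - db
        A = [lb * c for c in A]          # multiply through by the initial
        for i, b in enumerate(B):        # subtract la * z^shift * B
            A[shift + i] -= la * b
        A.pop()                          # leading term now cancels
    return A

# x^2 pseudo-reduced by 2x + 1: 4x^2 = (2x - 1)(2x + 1) + 1, remainder 1
```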
Given , we recall that stands for the differential ideal generated by . If , then we also denote and recall that
forms a differential ideal. We say that forms an autoreduced sequence if Ritt reduces to itself with respect to for each . In that case the set of differential polynomials which reduce to zero with respect to coincides with the differential ideal .
In section 5, we will also consider differential rings and differential polynomials with respect to a finite number of pairwise commuting derivations . In that case, has to be replaced with the set of all expressions with and . The notion of admissible rankings and Ritt reduction can be extended to this setting and we refer to classical textbooks on differential algebra [17, 14] for more details.
In order to explicitly write down a differential polynomial , it is convenient to use vector notation. We will index differential monomials by vectors where each is itself a finite sequence which may be padded with zeros whenever necessary. We denote
after which we may write
with . For a fixed degree , it will be convenient to denote by the homogeneous component of of degree :
The largest with will be called the degree of and we will denote it by . The smallest with will be called the differential valuation of and we denote it by . It will also be convenient to denote and .
Given a differential polynomial and a “point” , it is often convenient to consider the additive conjugate of by , which is defined to be the unique differential polynomial with
for all . The coefficients of can be expressed directly by using a Taylor series expansion:
In particular, we get .
Assume now that and . Given , we will denote by its valuation in . This valuation naturally extends to differential polynomials in via the inclusion .
Assume now that and let be a homogeneous differential polynomial in a single variable of degree . Then we will denote by the polynomial with
For any series , we then have
We will also denote by the largest root of in , while taking if no such root exists.
For some purposes, we will occasionally consider logarithmic power series . Such series can still be considered as power series in and we will still denote by the valuation of in . The coefficients are polynomials in , and we will write . Notice that maps into itself for each .
Proposition
Proof. Let us first prove the existence of . If , then we may write with , and the equation admits the solution
since for every . For general , we may write with and take with
Let be a field. Let be a differential subalgebra of for with the property that for all and such that , we have . We will also call a power series domain.
A series is said to be D-algebraic over if it satisfies a non-trivial differential equation with . In positive characteristic , any series satisfies the linear differential equation
whence any series is D-algebraic (indeed, annihilates all series in ). From now on, we will therefore assume that has characteristic zero.
Proposition
Proposition
Assume now that is an effective power series domain. The most obvious way to effectively represent a D-algebraic power series in is to represent it by a pair where is a computable series and a non-trivial annihilator with . We define the multiplicity of as an annihilator of to be . We also say that the annihilator is non-degenerate if , and notice that the multiplicity of a non-degenerate annihilator is one. In order to make Proposition 3 effective, we will need a way to compute a non-degenerate annihilator as a function of an arbitrary annihilator. This is not completely immediate, and we postpone the presentation of an algorithm for doing so to the end of this section.
Let be D-algebraic over with annihilator . Assume that there exists a number such that for any with and , we have . Then the smallest such number will be denoted by and we call it the root separation of at . It corresponds to the number of initial conditions which should be known in order to determine in a unique way as a root of . In fact, always exists and, as we will see in the next section, we can give an algorithm to compute it under suitable assumptions.
Proposition
(1) 
Proof. Let and . Given with , we have
Now assume that . Then
whence
The following proposition also provides us with a partial converse.
Proposition
Proof. Notice that implies that , so that is finite. Let . We have to show the existence of a unique series with and . We may decompose
Extracting the coefficient of in the relation now yields
We say that is effective if its elements can be represented effectively and if all field operations can be carried out by algorithms. We will call an effective diophantine field if all positive integer roots of polynomials over can be determined by an algorithm. In particular, this means that admits an effective zero test, i.e. there exists an algorithm which takes an element of on input and which returns true if and false otherwise.
A power series is said to be computable if there exists an algorithm for computing as a function of . The power series domain is said to be effective if its elements are all computable power series and if the differential algebra operations can be carried out by algorithms. We notice that the differential algebra of all computable series is effective, although it does not admit an effective zero test.
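Over the rationals, the diophantine requirement is easy to meet: all positive integer roots of a polynomial can be found by scanning up to a classical root bound. A sketch for integer coefficients (the Cauchy-style bound used below is deliberately crude):

```python
def positive_integer_roots(p):
    """All positive integer roots of p = [p0, p1, ...] (lowest degree
    first, integer coefficients, p[-1] != 0), found by brute-force
    scanning up to a crude version of the Cauchy root bound."""
    bound = 1 + max(abs(c) for c in p) // abs(p[-1])
    return [n for n in range(1, bound + 1)
            if sum(c * n ** i for i, c in enumerate(p)) == 0]

# (x - 2)(x - 5) = x^2 - 7x + 10 has positive integer roots 2 and 5
```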
Assume now that we are given an effective power series domain with an effective zero test over an effective diophantine field . Assume also that we are given an effective D-algebraic power series and an annihilator for . Assume finally that the annihilator has multiplicity one, so that we may compute and by expanding the power series coefficients of . In other words, the bound (1) from Proposition 4 provides us with an effective upper bound for .
Given a polynomial , we will now give an algorithm ZeroTest (or when we want to make the dependency on and explicit) for testing whether . In particular, this shows that the algebra is again an effective power series domain.
Algorithm ZeroTest
1  If then return the result of the test 
2  If then return 
3  If then return 
4  If then return 
5  Let 
6  Return the result of the test 
Proof. We first notice that recursive calls only occur for differential polynomials of a strictly smaller Ritt rank. This guarantees termination of the algorithm.
As to its correctness, we clearly have at line 2 whenever .
Assume now that we reach line 3 with . Then the degree of in its leader cannot be one, since this would imply , and we know that . Consequently, is a constant multiple of , whence for some and , and .
Assume now that we reach line 4. We have for some and . Since and , it follows that .
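The overall shape of the test can be illustrated numerically. With f = exp (a non-degenerate root of f' − f), deciding whether an expression such as f'' − f vanishes comes down to checking coefficients up to a suitable bound. The sketch below uses truncated series arithmetic and an assumed truncation order of 8, so it illustrates the mechanics rather than the rigorous algorithm:

```python
from fractions import Fraction

def derivative(a):
    # derivative of a truncated series given as a coefficient list
    return [(i + 1) * a[i + 1] for i in range(len(a) - 1)]

# f = exp, truncated to order 8, from the recurrence for f' = f, f(0) = 1
f = [Fraction(1)]
for n in range(8):
    f.append(f[n] / (n + 1))

fp = derivative(f)            # f'
fpp = derivative(fp)          # f''
m = len(fpp)
p1 = [x - y for x, y in zip(fpp, f[:m])]         # f'' - f, should vanish
p2 = [x - 2 * y for x, y in zip(fp[:m], f[:m])]  # f' - 2f, should not
```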
Remark
Remark
In practical applications, the series is often the solution of a classical initial value problem, in which case . One disadvantage of the zero test from section 3.3 is that still depends on in quite an unpredictable way. In particular, even for simple , this quantity might a priori become arbitrarily large. In this section, we will give an improved version of our algorithm which does not have this drawback. The idea is to consider not only ordinary power series solutions to our differential equations, but also logarithmic power series solutions in , as introduced in section 2.5.
Let be D-algebraic over with annihilator . Assume that there exists a number such that for any with , we have . Then the smallest such number will be denoted by and we call it the strong root separation of at . Proposition 4 naturally strengthens to this setting:
Proposition
(4) 
Proof. The proof is similar to the proof of Proposition 4 with the following change. Writing with , we now have
The consideration of logarithmic solutions leads to a better bound for the existence part of Proposition 5.
Proposition
Proof. The proof is analogous to the proof of Proposition 5, with the exception that (3) should be replaced by
In the proof of the algorithm ZeroTest, we only needed the existence of the solution with and . In view of what precedes, we may thus improve the algorithm as follows:
Algorithm ZeroTest
1  If then return the result of the test 
2  If then return 
3  If then return 
4  If then return 
5  Let 
6  Return the result of the test 
Remark
Assume that we are given an effective power series domain with an effective zero test over an effective diophantine field . Assume also that we are given an effective D-algebraic power series and an annihilator for . Let us show how the zero test algorithm from the previous section can be used in order to compute , thereby providing an effective bound for via Proposition 4.
Algorithm RootSeparationBound
1  Let is an index with 
2  Repeat the following 
3  Let and 
4  If then return 
5  Let and be indices with , , , and set 
6  Let 
7  If then 
8  Let be such that and 
9  If for all with and , then return 
10  Let be an index with 
Proof. We first notice that strictly decreases at every iteration of the main loop, which implies the termination of our algorithm. As a loop invariant, we also notice that (whence ) whenever we are at line 3, which means that we indeed have an algorithm for the computation of . If , then the correctness of line 4 follows from Proposition 4. Otherwise, we construct such that and , whence the computability of at line 6. If at line 7, then Proposition 5 implies the existence and uniqueness of at line 8, and the relation (3) actually provides us with an algorithm to compute the coefficients of . Moreover and satisfy the assumptions for applying the algorithm to and its derivatives at line 9.
Remark
Proposition
Proposition
We are now in a position to make Proposition 3 effective. We first need a general algorithm for computing algebraic dependencies.
Proposition
If is a D-algebraic power series with non-degenerate annihilator of order , then the algebra is contained in the algebra , which is stable under .
Proposition
Proof. Let be represented by pairs and with . Applying Proposition 13, we may assume without loss of generality that the annihilators and are non-degenerate, of orders and . Then
Of course, the algorithm from the proof of Proposition 14 uses brute force for finding algebraic relations, so any algorithm which relies on this method is bound to be quite inefficient. In section 4 below, we will discuss algorithms which avoid relying on Proposition 14 for computations with D-algebraic series.
In the algorithms from the previous sections, we essentially represent D-algebraic power series by elements of , where is a computable root of a differential polynomial . Since is itself an effective power series domain with an effective zero test, we may also form towers and represent D-algebraic power series by elements of such towers. This generalization is useful for representing expressions involving , the algebra operations, and other D-algebraic operations such as , , etc. Indeed, differential polynomials which annihilate such expressions can quickly become quite large. In this section, we will introduce an even more convenient representation based on differential algebra, which generalizes the construction of towers and provides more flexibility for representing solutions to implicit equations at the end of section 5.5.
An abstract D-domain over is a differential algebra over of finite transcendence degree over , together with a differential algebra morphism which we will call the evaluation mapping. A second abstract D-domain with evaluation mapping is said to be equivalent to if admits the same image as . Assuming that is an effective power series domain with an effective zero test over an effective diophantine field , we say that is effective if is computable and is computable for each ; in that case, we also say that is an effective D-domain.
A D-domain is an abstract D-domain of the form
where are such that
and where the leaders of are for certain . It will be convenient to lift the evaluation mapping to using . Then becomes simply the evaluation mapping at . In particular, is effective as soon as is a tuple of computable power series. We will call the D-domain unmixed if for each . We will call the D-domain Pfaffian if is of the form with for all .
Given any D-domain over and , the series is D-algebraic over . If is effective, then we may effectively compute an annihilator for .
Any D-algebraic series over is represented by an element of an unmixed D-domain over . If is effective and , then we may effectively compute such a .
Proposition
Proposition
Proof. Let us first show that is equivalent to a D-domain with orders . Modulo the replacement of by , we may assume without loss of generality that and for all . Now consider formal variables with and . Let , and consider the algebra morphism , with for and . Consider the set of all polynomials with if and . Let be the ranking on . We define a ranking on by setting whenever or and . Then the D-domain with is equivalent to .
Before we generalize the zero test algorithms from section 3, we will need a way to asymptotically normalize systems of linear differential equations with power series coefficients. The normalization that we will use is an asymptotic variant of the Hermite normal form.
Let us first consider a square matrix . We say that is in Hermite normal form if is upper triangular and there exist integers such that
whenever or .
for all .
If has maximal rank , then we must have for each . It can be shown that there exists a unique unimodular matrix such that is in Hermite normal form. Moreover, and there is an algorithm for the computation of if is an effective field.
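For concreteness, here is a standard row-style Hermite normal form computation for a nonsingular integer matrix, using only unimodular row operations; this is the scalar prototype of the dominant variant developed below (an illustrative sketch, not the paper's algorithm):

```python
def hermite_normal_form(M):
    """Row-style Hermite normal form of a nonsingular square integer
    matrix: upper triangular with positive pivots and entries above each
    pivot reduced modulo it, reached by unimodular row operations."""
    M = [row[:] for row in M]
    n = len(M)
    for j in range(n):
        for i in range(j + 1, n):            # Euclidean elimination below pivot
            while M[i][j] != 0:
                q = M[j][j] // M[i][j]
                M[j] = [a - q * b for a, b in zip(M[j], M[i])]
                M[j], M[i] = M[i], M[j]
        if M[j][j] < 0:                      # normalize the pivot sign
            M[j] = [-a for a in M[j]]
        for i in range(j):                   # reduce entries above the pivot
            q = M[i][j] // M[j][j]
            M[i] = [a - q * b for a, b in zip(M[i], M[j])]
    return M
```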
Let us now consider a matrix of rank . Recall that commutes with following the law . For each we will denote by the th row of and by
its “skew dominant coefficient”, where
If , then we understand that . The matrix with rows will be called the row dominant matrix of . We say that is in dominant Hermite normal form if is in Hermite normal form. Any matrix with an inverse in and such that is in dominant Hermite normal form will be called a normalization matrix. We claim that such a matrix always exists. What is more, if the entries of are computable power series, then we may use the following algorithm to compute .
Algorithm NormalizationMatrix
1  Let 
2  Let be such that is in Hermite normal form 
3  Let be such that 
4  If has rank then return 
5  Otherwise return 
Proof. If , then is a normalization matrix of by construction. If , and assuming that is a normalization matrix of , then is a normalization matrix of , since is always invertible. This proves the correctness of the algorithm.
As to its termination, let and let be minimal such that the matrix with rows has rank for each . We claim that can only increase and that the tuple can only decrease for the lexicographical ordering, once stabilizes.
Indeed, let , , and let be minimal such that the matrix with rows has rank for each . Then the rows are precisely the first rows of the Hermite normal form , and therefore form a matrix of rank . This shows that and for each .
Given an abstract D-domain , we claim that there exists a number such that for any alternative evaluation mapping on , we have whenever for all . The minimal such number will be called the root separation for and we denote it by . Our claim clearly holds when is an unmixed D-domain. Indeed, in this case, we have
with the notations from above. The general case reduces to this particular case by applying Proposition 17. However, since the computation of univariate differential polynomials which annihilate given elements of may be quite expensive, we would like to have a more direct bound. We first need a few preliminaries.
Consider linear differential polynomials . Any such polynomial can formally be viewed as an element of . Let be the matrix with . Assuming that this matrix has rank over , we may use the method from section 4.2 to compute a normalization matrix for . We will call such a a normalization matrix for . Applying to the column vector with entries we obtain a new column vector with “dominant reduced” entries . By construction, the dominant coefficient of is a polynomial of the form
with . We will denote by the polynomial with . We also denote by the largest root of in , or if no such root exists.
Now consider a D-domain with evaluation mapping . Let . Since , each is non-zero and has the same leader as . In particular, the polynomials are linearly independent over . Consequently, there exists a normalization matrix for , whence and are well defined for all .
Proposition
Proof. Let for each . Given with , there exists a largest index with . We have
Assuming that , we also have
whence
In order to generalize the zero test from section 3.3 to the setting of D-domains, we first need a suitable counterpart of Proposition 5 in addition to Proposition 19.
Assume that we are given a differential ring where are such that for certain . Given and , we would like to solve the system of equations for with . Assuming that for each , we may define and as in the previous section. Since is a normalization matrix, there also exists a matrix with . Then we have and . Now consider
We have the following analogue of Proposition 5 for solving the equation for sufficiently small .
Proposition
Proof. By what precedes, it suffices to show that the equation admits a unique solution with . Let for each . Recall that we may write
for each , with . We now decompose each as
Extracting the coefficient of in the relation now yields
Algorithm ZeroTest
1  If then return the result of the test 
2  If then return 
3  If then return 
4  If for some then return 
5  Let be an upper bound for 
6  Let be such that for some 
7  Let 
8  Return the result of the test 
The zero test from the previous section may be further improved along similar lines as what we did in section 3.4. Given an abstract D-domain , the strong root separation for is the smallest number such that for any alternative “evaluation” mapping , we have whenever for all . The existence of such a number is shown in the same way as before and for D-domains we have the usual explicit bound:
Proposition
Proof. The proof is similar to the proof of Proposition 19. This time, , we again pick to be the largest index with , so that we may write with . In the same way as before, we now get
For the analogue of Proposition 20, we define
Proposition
Proof. The proof is similar to the proof of Proposition 20, except that (7) should be replaced by
In the algorithm ZeroTest from the previous section, it is now possible to replace by in the definition of .
Given a subring , we will denote by the intersection of with the fraction field of . We say that is a power series domain if is a differential ring with respect to the derivations such that and is stable under the substitutions .
A power series domain is said to be effective if the differential algebra operations can be carried out by algorithms and for each we have a computable mapping which takes on input and returns as a computable power series in . In particular, this means that every can be regarded as a computable power series in the sense that there exists an algorithm which takes on input and returns the coefficient of in on output. We also notice that the quotient field of an effective power series domain is an effective differential field, when representing fractions in the usual way. In particular, is an effective differential algebra.
The above definitions can be generalized to countable dimension as follows. We let , where we regard each as being naturally included into . A subset is said to be a power series domain if is a power series domain for each . We say that is effective if each is and if we have an algorithm for computing an upper bound for the dimension of any given series .
Given , we let denote the evaluation of at . Given and with , we define the composition of and to be the unique power series with
We say that a power series domain is stable under composition if for any and with . If we also have an algorithm for the computation of , then we say that is effectively stable under composition.
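Composition of truncated series (with the inner series vanishing at the origin, so that the composition is well defined) can be sketched with a Horner scheme; an illustrative Python version:

```python
def compose(f, g, order):
    """Coefficients of f(g) up to the given order, for truncated series
    as coefficient lists (lowest degree first); requires g[0] == 0."""
    assert g[0] == 0, "inner series must have zero constant term"

    def mul(a, b):
        # truncated product of two coefficient lists
        c = [0] * (order + 1)
        for i, x in enumerate(a[: order + 1]):
            for j, y in enumerate(b[: order + 1 - i]):
                c[i + j] += x * y
        return c

    result = [0] * (order + 1)
    for c in reversed(f[: order + 1]):   # Horner: result <- result * g + c
        result = mul(result, g)
        result[0] += c
    return result

# composing 1/(1 - z) with z^2 gives 1/(1 - z^2) = 1 + z^2 + z^4 + ...
```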
Let with and . Assume that the matrix formed by the first columns of the scalar matrix
is invertible. Then the implicit function theorem implies that there exist unique power series , such that the completed vector with satisfies . We say that a power series domain satisfies the implicit function theorem if for the above solution of , whenever . We say that effectively satisfies the implicit function theorem if we also have an algorithm to compute as a function of .
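To make the statement concrete, the following hedged Python sketch computes the implicit series solution of one sample equation f(x, y) = y - x - x*y = 0 with y(0) = 0 by fixed-point iteration; the invertibility hypothesis of the implicit function theorem corresponds here to the fact that the partial derivative of f with respect to y does not vanish at the origin. The equation and the coefficient-vector representation are ours, not the paper's.

```python
from fractions import Fraction

# Hedged illustration (the example equation is ours): solve
# f(x, y) = y - x - x*y = 0 with y(0) = 0 by iterating y <- x + x*y on
# truncated coefficient vectors; each pass fixes one further coefficient.
ORDER = 8
y = [Fraction(0)] * ORDER                   # y[n] = coefficient of x^n
for _ in range(ORDER):
    new = [Fraction(0)] * ORDER
    new[1] += Fraction(1)                   # the term x
    for n in range(1, ORDER):
        new[n] += y[n - 1]                  # the term x*y shifts y by one
    y = new

# the exact solution is y = x/(1 - x) = x + x^2 + x^3 + ...
assert all(y[n] == 1 for n in range(1, ORDER))
```

The same contraction argument underlies the general uniqueness statement: each iteration is an operation in the ambient domain, so closure of the domain under the construction is exactly what the definition requires.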
Consider an invertible matrix with rational coefficients. Then the transformation
is called a monomial transformation, where we consider as a column vector. For a power series whose support satisfies , we may apply the monomial transformation to as well:
A power series domain is said to be stable under monomial transformations if for any and invertible matrix with , we have . We say that is effectively stable under monomial transformations if we also have an algorithm to compute as a function of and . Notice that we do not require the existence of a test whether in this case (the behaviour of the algorithm being unspecified whenever ).
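A small Python sketch may clarify the action on exponents (the dictionary representation is ours): a series is stored as a map from exponent tuples to coefficients, and the monomial transformation multiplies each exponent vector by the invertible rational matrix.

```python
from fractions import Fraction

# Sketch (representation is ours): a monomial transformation maps each
# monomial z^e to z^(A*e) for an invertible rational matrix A; on a series
# it acts on every exponent vector of the support.
def monomial_transform(series, A):
    """series: dict from exponent tuples to coefficients;
    A: invertible matrix given as a list of rows of Fractions.
    Requires each image exponent A*e to be a nonnegative integer vector
    (the support condition in the text)."""
    out = {}
    for e, c in series.items():
        img = tuple(sum(A[i][j] * e[j] for j in range(len(e)))
                    for i in range(len(A)))
        assert all(v.denominator == 1 and v >= 0 for v in img)
        img = tuple(int(v) for v in img)
        out[img] = out.get(img, 0) + c
    return out

# the swap of two variables: z1^2 z2 -> z1 z2^2
A = [[Fraction(0), Fraction(1)], [Fraction(1), Fraction(0)]]
assert monomial_transform({(2, 1): 3}, A) == {(1, 2): 3}
```

The assertion inside the function reflects the remark in the text: membership of the transformed series in the domain is a precondition, not something the algorithm is required to test.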
Given an effective power series domain which effectively satisfies each of the above closure properties (composition, implicit functions, and monomial transformations), it can be shown that an effective version of the Weierstrass preparation theorem holds. We refer to [11] for details.
Let be a multivariate power series domain. Given a power series and , we may consider as a power series
in with coefficients in , and also as a power series in with coefficients in the fraction field of . If is D-algebraic over for this latter interpretation of , then we say that is D-algebraic in (or with respect to ). We say that is D-algebraic over if is D-algebraic in each of the variables .
Proposition
Proof. Given , let be a non-trivial differential equation satisfied by in . Let denote the valuation of in . Modulo division of by , we may assume without loss of generality that . Now satisfies the non-trivial equation . This shows that is D-algebraic over .
For , we will prove by induction that is D-algebraic over . So assume that are D-algebraic over , whence over . Given , the series
Proposition
From now on, denotes the set of differential polynomials with respect to the pairwise commuting derivations . Proposition 2 also admits a natural generalization:
Proposition
Assume now that is an effective multivariate power series domain. We may effectively represent a D-algebraic series over by a tuple where is a computable power series in and an annihilator for with respect to , for each .
Proposition
Proof. We prove the proposition by induction over . For , the result is trivial, so assume that and that we have proved the result for all smaller .
Given , let us first show how to compute the coefficients of the power series expansion of a given with respect to . The induction hypothesis provides us with a zero test in . In the proof of Proposition 23, we thus have an algorithm for the computation of the valuation , and the remainder of this proof is constructive.
Let denote the quotient field of . We claim that is an effective Diophantine field. Indeed, given a polynomial , an integer is a root of if and only if is a root of multiplied by the denominators of its coefficients. Without loss of generality, we may therefore assume that . After this reduction, is a root of if and only if is a root of the coefficient of in for all . Let be such that this coefficient is non-zero and let be its largest root in (or if no such root exists). We may now check whether for and compute the largest root of in .
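The first reduction above (clearing denominators to pass to integer coefficients) can be realized as in the following hedged Python sketch; the function name is ours. Once the coefficients are integers, any non-zero integer root must divide the trailing non-zero coefficient, which yields a finite search.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

# Hedged sketch (function name is ours): find the integer roots of a
# polynomial with rational coefficients by clearing denominators, stripping
# the power of x, and testing divisors of the trailing non-zero coefficient.
def integer_roots(coeffs):
    """coeffs[i] is the (rational) coefficient of x^i; coeffs not all zero."""
    lcm = reduce(lambda a, b: a * b // gcd(a, b),
                 (c.denominator for c in coeffs), 1)
    ic = [int(c * lcm) for c in coeffs]            # integer coefficients
    v = next(i for i, c in enumerate(ic) if c != 0)
    ic, roots = ic[v:], ({0} if v > 0 else set())  # factor out x^v
    c0 = abs(ic[0])
    for d in range(1, c0 + 1):
        if c0 % d == 0:
            for r in (d, -d):                      # candidate divisors
                if sum(c * r**i for i, c in enumerate(ic)) == 0:
                    roots.add(r)
    return sorted(roots)

# (x^2 + x - 6)/6 = (x - 2)(x + 3)/6 has integer roots 2 and -3
assert integer_roots([Fraction(-1), Fraction(1, 6), Fraction(1, 6)]) == [-3, 2]
```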
Let be a multivariate power series domain. An abstract multivariate D-domain over is a differential algebra for together with a differential algebra morphism . A multivariate D-domain over is an abstract multivariate D-domain over of the form
where are such that
and where each only involves derivations of the form and admits a leader of the form for certain . We will denote by those elements of the fraction field of such that . We will also write for . We say that is effective if is an effective power series domain and is computable for each . We say that is unmixed if for all and . We say that is Pfaffian if is of the form with for all .
The proof of the following proposition is analogous to the proof of Proposition 16:
Given any multivariate D-domain over and any , the series is D-algebraic over . If is effective, then we may effectively compute .
Corollary
The proofs of the following propositions are analogous to the proofs of Propositions 17 and 18:
Proposition
Proposition
Consider with and . Let be such that for all and assume that . Then we recall from differential algebra [17, 14] that the polynomial of and is defined by
Given a D-domain as above, we say that is coherent if for all . Given an arbitrary effective D-domain , we may compute an equivalent effective coherent D-domain using the algorithm below, where we make use of the effective zero test from Corollary 28:
Algorithm MakeCoherent
1  Let 
2  Repeat the following 
3  Let 
4  If there exists with , then set 
5  Else if with and , then set 
6  Else if with and , then set 
7  Else if with and , then set 
8  Else if with and , then set 
9  Else return with the evaluation induced by 
By Proposition 26, there exists an algorithm for extracting the coefficients of multivariate D-algebraic power series with respect to a single variable . In the case of power series which are represented by elements of a D-domain , it would be nice to have a more efficient and systematic algorithm. In particular, given and , we would like to effectively construct a D-domain such that for any and any , we can produce a with . For this, it suffices that contains all coefficients of the form with and .
Theoretically speaking, we may construct using Proposition 26. As a first optimization, we claim that there exists a computable finite set such that we can take whenever . Indeed, for any , we may regard as a power series in with coefficients in the quotient field of . For sufficiently large , the coefficients of this power series are determined by a recurrence relation of type (3).
As a second optimization, assume that is Pfaffian and regular in , meaning that the valuations of the in all vanish. We will take
with for all , and where the differential ideal is constructed as follows. Given and , let
Since , extracting the coefficient of in the equation
yields a relation
for some . Now let be the set of all differential polynomials of the form . Then we may take . Notice that is again Pfaffian if is Pfaffian. However, regularity in the other variables is not necessarily preserved. Nevertheless, if for all , then the recursive extraction of coefficients only yields regular Pfaffian D-domains.
Remark
From what precedes, it follows in particular that there exists a constant such that we may take for all . Hence for any and any , we have . We may then reinterpret the D-domain as a univariate D-domain by only retaining the relations and using coefficients of the form with . This allows us to replace the theoretical zero test from Corollary 28 by one of the more efficient zero tests from section 4.
Let with and denote
Let be the submatrix spanned by the first columns of and the submatrix spanned by the last columns. Assume that is invertible. Then the implicit function theorem implies that there exist unique power series , such that the completed vector with satisfies
(9) 
Forming the Jacobian matrix of , we let and denote the submatrices spanned by the first resp. last rows. Differentiating the relation (9), we obtain
Since , this yields
Given any , consider the row matrices and . Then
Let , and . Let and be diagonal matrices with entries resp. . Denoting by the row vector of derivations with
the above relation implies
Notice that maps power series to row vectors of power series.
Assume now that for some effective multivariate D-domain of dimension over with
Assume also that the coordinate functions are among the . We will make the additional assumption that for all . In particular, setting , we have . We introduce a new evaluation mapping on by taking
We may formally extend into an evaluation from into . This evaluation is not compatible with the original derivations , but we do have
for all , and for each of the derivations . Since is finite dimensional as a algebra, Proposition 14 implies that satisfy a computable non-trivial algebraic relation for each and . Using Proposition 13, we may assume without loss of generality that these relations are non-degenerate. We now define a new multivariate D-domain for the derivations , by taking
and taking the evaluation mapping as in (10). By construction, for each , so is D-algebraic.
The additional assumption that for all is quite benign. Indeed, for any index with , postcomposition of with means in particular sending to . This can also be achieved by extracting the coefficient of in . Consequently, modulo a finite number of coefficient extractions, we may assume without loss of generality that for all . We have proved:
Proposition
Proposition
The construction of is somewhat unsatisfactory since the computation of individual algebraic relations using Proposition 14 is usually quite expensive. In the frequent situation that for all , we may use the following more efficient technique. Using Proposition 30, we may assume without loss of generality that the original relations are of the form with . By assumption, we have for all . With the above notations, let denote the horizontal concatenation of the matrices and , so that for all and has coefficients in . Modulo the relations , we may then write
For each and , this means that there exists a power product of the and a polynomial with
This allows us to take in the construction of the multivariate D-domain instead of the relation found using Proposition 14.
Proposition
Proof. Let be an effective D-domain over , let and . Given an invertible matrix with , we have to show how to compute . Given a row vector , we will denote . Then for any power series and any , we notice that
The above proposition could in principle be used for computing with monomial transforms of elements in an effective D-domain . However, just as it is often more efficient to keep fractions in symbolic form when computing in , it is often more efficient to keep monomial transforms in their symbolic form when computing with them.
More precisely, we first notice that monomial transformations with are somewhat easier, since they can be computed using our algorithms for composition. Now two monomial transformations and are said to be compatible if there exist invertible matrices with . Given with and , let , and . Since and have coefficients in , we may construct a D-domain with . Now and . In particular, any can be represented by for some .
From the geometrical point of view, the notion of compatibility can be interpreted as follows. Let be an open subset of and . Let be the subset of of series with . Given a monomial transformation , we call its associated cone. Given two monomial transformations and with associated cones and , it is not hard to check that these transformations are compatible if and only if .
It should be emphasized that zero testing of power series is quite an asymmetric problem in the sense that it is usually easy to prove that a non-zero series is indeed non-zero (it suffices to find a non-zero coefficient), whereas it is often hard to prove vanishing series to be zero. Since exact zero tests are so slow, it is often preferable to use heuristic zero tests instead. In fact, heuristic zero tests can be combined with genuine zero tests: for a complex computation which involves many zero tests, we first perform all zero tests heuristically. At the end of the computation, we collect all series which were heuristically assumed to be zero in order to compute the result, and we apply an exact zero test to these series.
The most obvious heuristic zero test for a univariate D-algebraic series is to compute all coefficients up to a fixed order. Even for large orders, these coefficients can be computed efficiently using Newton's method or relaxed power series evaluation [2, 8, 1, 9, 6, 10]. In the case of multivariate D-algebraic series , one simple idea is to generate a random scalar vector and to test whether . The composition can be computed efficiently using the algorithm(s) from section 5.5. Here we notice the remarkable fact that the derivation
for which
does not depend on . This makes it actually possible to design another exact zero test for multivariate D-algebraic series: compute a univariate D-domain capable of representing where is treated as a formal parameter, and test whether .
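The first heuristic (truncation at a fixed order) can be sketched as follows in Python; the example series exp(z)*exp(-z) - 1 and all function names are ours, chosen only to illustrate the idea.

```python
import math
from fractions import Fraction

# Sketch of the heuristic zero test described above (names are ours):
# expand the candidate series to a fixed order and declare it "probably
# zero" when every computed coefficient vanishes.
def coeff_exp(sign, n):
    """n-th Taylor coefficient of exp(sign*z)."""
    return Fraction(sign**n, math.factorial(n))

def heuristic_is_zero(coeff, order=20):
    return all(coeff(n) == 0 for n in range(order))

def f_coeff(n):
    """coefficients of f = exp(z)*exp(-z) - 1, via the Cauchy product"""
    prod = sum(coeff_exp(1, k) * coeff_exp(-1, n - k) for k in range(n + 1))
    return prod - (1 if n == 0 else 0)

assert heuristic_is_zero(f_coeff)                       # f is actually zero
assert not heuristic_is_zero(lambda n: coeff_exp(1, n))  # exp(z) is not
```

As the text emphasizes, a positive answer of such a test is only a conjecture; the series heuristically assumed to be zero must afterwards be confirmed by one of the exact zero tests.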
[1] A. Bostan, F. Chyzak, F. Ollivier, B. Salvy, É. Schost, and A. Sedoglavic. Fast computation of power series solutions of systems of differential equations. In Proceedings of the 18th ACM-SIAM Symposium on Discrete Algorithms, pages 1012–1021, New Orleans, Louisiana, U.S.A., January 2007.
[2] R.P. Brent and H.T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.
[3] J. Della Dora, C. Dicrescenzo, and D. Duval. A new method for computing in algebraic number fields. In G. Goos and J. Hartmanis, editors, Eurocal'85 (2), volume 174 of Lect. Notes in Comp. Science, pages 321–326. Springer, 1985.
[4] J. Denef and L. Lipshitz. Power series solutions of algebraic differential equations. Math. Ann., 267:213–238, 1984.
[5] J. Denef and L. Lipshitz. Decision problems for differential equations. The Journ. of Symb. Logic, 54(3):941–950, 1989.
[6] M. J. Fischer and L. J. Stockmeyer. Fast on-line integer multiplication. Proc. 5th ACM Symposium on Theory of Computing, 9:67–72, 1974.
[7] J. van der Hoeven. A new zero-test for formal power series. In Teo Mora, editor, Proc. ISSAC '02, pages 117–122, Lille, France, July 2002.
[8] J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
[9] J. van der Hoeven. Newton's method and FFT trading. JSC, 45(8):857–878, 2010.
[10] J. van der Hoeven. Faster relaxed multiplication. Technical report, HAL, 2012. http://hal.archives-ouvertes.fr/hal-00687479.
[11] J. van der Hoeven. Effective power series computations. Technical report, HAL, 2014. http://hal.archives-ouvertes.fr/.
[12] J. van der Hoeven and J.R. Shackell. Complexity bounds for zero-test algorithms. JSC, 41:1004–1020, 2006.
[13] A.G. Khovanskii. Fewnomials, volume 88 of Translations of Mathematical Monographs. A.M.S., Providence RI, 1991.
[14] E.R. Kolchin. Differential algebra and algebraic groups. Academic Press, New York, 1973.
[15] A. Péladan-Germa. Tests effectifs de nullité dans des extensions d'anneaux différentiels. PhD thesis, Gage, École Polytechnique, Palaiseau, France, 1997.
[16] R.H. Risch. Algebraic properties of elementary functions in analysis. Amer. Journ. of Math., 4(101):743–759, 1975.
[17] J.F. Ritt. Differential algebra. Amer. Math. Soc., New York, 1950.
[18] J. Shackell. A differential-equations approach to functional equivalence. In Proc. ISSAC '89, pages 7–10, Portland, Oregon, 1989. ACM Press.
[19] J. Shackell. Zero equivalence in function fields defined by differential equations. Proc. of the A.M.S., 336(1):151–172, 1993.