Effective analytic functions

Joris van der Hoeven
Département de Mathématiques (bât. 425), Université Paris-Sud, 91405 Orsay Cedex, France

Abstract. One approach to computations with special functions in computer algebra is the systematic use of analytic functions whenever possible. This naturally leads to the problem of answering questions about analytic functions in a fully effective way. Such questions comprise the determination of the radius of convergence or the evaluation of the analytic continuation of the function at the endpoint of a broken line path. In this paper, we propose a first definition for the notion of an effective analytic function and we show how to effectively solve several types of differential equations in this context. We limit ourselves to functions in one variable.

1. Introduction

An important problem in computer algebra is how to compute with special functions which are more complicated than polynomials. A systematic approach to this problem is to recognize that most interesting special functions are analytic, so that they are completely determined by their power series expansions at a point. Of course, the concept of "special function" is a bit vague. One may for instance study expressions which are built up from a finite number of classical special functions. But one may also study larger classes of "special functions", such as the solutions to systems of algebraic differential equations with constant coefficients. Let us assume that we have fixed a class $\mathcal{F}$ of such special functions or expressions for the remainder of this introduction.

In order to develop a satisfactory computational theory for the functions or expressions in $\mathcal{F}$, one may distinguish the following main subproblems:

- How to test whether $f\in\mathcal{F}$ locally represents the zero power series?
- How to evaluate $f\in\mathcal{F}$ safely and efficiently at any point where $f$ may be defined?

The first subproblem is called the zero-test problem and it has been studied before in several works. For large classes of special functions, it turns out that the zero-test problem for power series can be reduced to the zero-test problem for constants, a problem which has been discussed in the literature as well. In this paper, we will focus on the second subproblem, while leaving aside the efficiency considerations and restricting our attention to functions in one variable.

By "safe evaluation" of $f\in\mathcal{F}$ at $z$, we mean that we want an algorithm which computes an approximation $\tilde f(z)\in(\mathbb{Z}+\mathrm{i}\,\mathbb{Z})\,2^{-n}$ for $f(z)$ with $|\tilde f(z)-f(z)|\leq\varepsilon$ for any $\varepsilon=2^{-n}$. If such an algorithm exists, then we say that $f(z)$ is an effective complex number. By "any point where the expression may be defined", we mean that we do not merely plan to study $f$ near the point where a power series expansion was given, but that we also want to study all possible analytic continuations of $f$.

In other words, the aim of this paper is to develop an "effective complex analysis" for computations with analytic functions in a class $\mathcal{F}$ which we wish to be as large as possible. Such computations mainly consist of safe evaluations, bound computations for convergence radii and for absolute values of functions on closed disks, and analytic continuation. Part of the philosophy behind this paper also occurs in previous work, but without the joint emphasis on effectiveness at all stages and on usefulness from the implementation point of view. In previous papers, we have also studied in detail the fast and safe evaluation of holonomic functions.
When studying solutions to non-linear differential equations, one must carefully avoid undecidable problems:

Theorem 1. Given a power series $f=\sum_{n}f_{n}z^{n}$ with rational coefficients, which is the unique solution of an algebraic differential equation
$$P(z,f,\ldots,f^{(k)})=0$$
with rational coefficients and rational initial conditions, one cannot in general decide whether the radius of convergence $r(f)$ of $f$ satisfies $r(f)<1$ or $r(f)\geq1$.

What we will show in this paper is that whenever we know that an analytic function $f=\sum_{n}f_{n}z^{n}$ as in the above theorem may be continued analytically along an "effective broken line path" $\delta$, then the value $f(\delta)$ of $f$ at the endpoint of $\delta$ is effective (Theorem 3). We will also show that we may "effectively solve" any monic linear differential equation, without introducing singularities which were not already present in the coefficients (Theorem 2).

In order to prove these results, we will carefully introduce the concept of an "effective analytic function" in section 2. The idea is that such a function is given locally at the origin and that we require algorithms for

- computing the coefficients of the power series expansion;
- computing a lower bound $\rho(f)$ for the radius of convergence;
- computing an upper bound for $|f|$ on any closed disk of radius $r<\rho(f)$;
- analytic continuation.

But we will require additional conditions, which will ensure that the computed bounds are good enough from a more global point of view. In section 3, we will show that all analytic functions which are constructed from the effective complex numbers and the identity function using $+$, $-$, $\times$, $/$, $\partial/\partial z$, $\int$, $\exp$ and $\log$ are effective. In section 4, we will study the resolution of differential equations.

It is convenient to specify the actual algorithms for computations with effective analytic functions in an object oriented language with abstract data types. Effective types and their methods will be indicated through the use of a special font. We will not detail memory management issues and assume that the garbage collector takes care of this. The Columbus program is a concrete implementation of some of the ideas in this paper, although this program works with double precision instead of effective complex numbers.

2. Effective analytic functions

2.1. Effective numbers and power series

A complex number $z\in\mathbb{C}$ is said to be effective, if there exists an algorithm which takes a positive number $\varepsilon=2^{-n}$ on input and which returns an approximation $\tilde z\in(\mathbb{Z}+\mathrm{i}\,\mathbb{Z})\,2^{-n}$ for $z$ with $|\tilde z-z|\leq\varepsilon$. In practice, we will represent $z$ by an abstract data structure with a method which implements the above approximation algorithm. The complexity of an effective complex number $z$ is the asymptotic complexity of its approximation algorithm as a function of $n$. We denote by $\mathbb{C}^{\mathrm{eff}}$ the set of effective complex numbers; it contains the natural subset $\mathbb{R}^{\mathrm{eff}}$ of effective real numbers. Recall, however, that there is no algorithm in order to decide whether an effective complex number is real. In the sequel, we will use the notations $\mathbb{R}^{\geq}=\{x\in\mathbb{R}:x\geq0\}$ and $\mathbb{R}^{>}=\{x\in\mathbb{R}:x>0\}$.

Let $A$ be an effective ring, in the sense that all elements of $A$ can be represented by explicit data structures and that we have algorithms for the ring operations $+$, $-$ and $\times$. If we also have an effective zero test, then we say that $A$ is an effective ring with zero test. An effective power series over $A$ is a series $f\in A[[z]]$, such that there exists an algorithm in order to compute the $n$-th coefficient of $f$. Effective series over $A$ are represented by instances of the abstract data type Series(A), which has a method expand: $\mathbb{N}\to A[z]$, which computes the truncation $f_{0}+\cdots+f_{n-1}z^{n-1}\in A[z]$ of $f$ at order $n$ as a function of $n\in\mathbb{N}$. The complexity of $f$ is the asymptotic complexity of this expansion algorithm. In particular, we have an algorithm to compute the $n$-th coefficient $f_{n}$ of an effective series.
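For concreteness, the following Python fragment sketches how these two data types might be modelled. It is only an illustration and not part of the specification above: the class names are ours, floating point numbers play the role of dyadic approximations, and only addition is implemented for effective numbers.

    import math

    # Illustrative sketch only: an effective number is modelled by a callable
    # eps -> approximation within eps, an effective series by its expansion
    # method n -> [f_0, ..., f_{n-1}].  Floats stand in for dyadic approximations.
    class EffectiveComplex:
        def __init__(self, approx):
            # approx(eps) must return an approximation z~ with |z~ - z| <= eps
            self.approx = approx

        @staticmethod
        def exact(z):
            # an exactly known number serves as its own approximation for every eps
            return EffectiveComplex(lambda eps: complex(z))

        def __add__(self, other):
            # approximating both operands within eps/2 yields the sum within eps
            return EffectiveComplex(lambda eps: self.approx(eps / 2) + other.approx(eps / 2))

    class EffectiveSeries:
        def __init__(self, expand):
            # expand(n) returns the truncation [f_0, ..., f_{n-1}] of the series
            self.expand = expand

    # example: the exponential series sum_k z^k / k!
    exp_series = EffectiveSeries(
        lambda n: [EffectiveComplex.exact(1 / math.factorial(k)) for k in range(n)])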
Efficient methods to compute with effective power series have been surveyed elsewhere and will not be discussed here.

2.2. Effective germs

When $f$ is an effective series in Series($\mathbb{C}^{\mathrm{eff}}$), we denote by $f^{*}\in\mathbb{C}[[z]]$ the actual series which is represented by $f$. If $f^{*}$ is the germ of an analytic function, then we denote by $r(f^{*})$ the radius of convergence of $f^{*}$ and by $\lceil f^{*}\rceil_{r}$ the maximum of $|f^{*}|$ on the closed disk $\bar B_{r}=\{z\in\mathbb{C}:|z|\leq r\}$ of radius $r$, for each $r<r(f^{*})$. An effective germ of an analytic function at $0$ is an abstract data structure $f$ which inherits from Series($\mathbb{C}^{\mathrm{eff}}$) and which carries the following additional methods:

- A method radius: $()\to\mathbb{R}^{\mathrm{eff}}\cup\{+\infty\}$ which returns a lower bound $\rho(f)>0$ for $r(f^{*})$. This bound is called the effective radius of convergence of $f$.
- A method bound: $\mathbb{R}^{\mathrm{eff}}\to\mathbb{R}^{\mathrm{eff}}$ which, given $0\leq r<\rho(f)$, returns an upper bound $\lceil f\rceil_{r}$ for $\lceil f^{*}\rceil_{r}$. We assume that $\lceil f\rceil_{r}$ is increasing in $r$.

Remark 1. For the sake of simplicity, we have reused the notations $\rho(f)$ and $\lceil f\rceil_{r}$ in order to denote the results of applying the methods radius and bound to $f$. Clearly, one should carefully distinguish between $\rho(f)$ and $r(f^{*})$, resp. between $\lceil f\rceil_{r}$ and $\lceil f^{*}\rceil_{r}$. The $n$-th coefficient of $f$ will always be denoted by $f_{n}=(f^{*})_{n}$.

Given an effective germ $f$, we may implement a method eval, which takes an effective complex number $z$ with $|z|<\rho(f)$ on input, and which computes the effective value $f(z)$. For this, we have to show how to compute arbitrarily good approximations of $f(z)$:

Algorithm eval(f, z, ε)
Input: an effective germ $f$, a number $z\in\mathbb{C}^{\mathrm{eff}}$ with $|z|<\rho(f)$, and $\varepsilon=2^{-n}$.
Output: an approximation $\tilde f(z)$ for $f(z)$ with $|\tilde f(z)-f(z)|\leq\varepsilon$.

Step 1. [Compute expansion order]
  Let $r:=(\rho(f)+|z|)/2$ and $M:=\lceil f\rceil_{r}$.
  Let $m\in\mathbb{N}$ be smallest such that
  $$M\,\frac{(|z|/r)^{m}}{1-|z|/r}\leq\frac{\varepsilon}{2}.$$
Step 2. [Approximate the series expansion]
  Compute an approximation $\tilde v$ of $f_{0}+f_{1}z+\cdots+f_{m-1}z^{m-1}$ with error at most $\varepsilon/2$.
  Return $\tilde v$.

The correctness of this algorithm follows from Cauchy's formula: the discarded tail satisfies
$$\Bigl|\sum_{k\geq m}f_{k}z^{k}\Bigr|=\Bigl|\frac{1}{2\pi\mathrm{i}}\oint_{|w|=r}\frac{f(w)\,z^{m}}{w^{m}\,(w-z)}\,\mathrm{d}w\Bigr|\leq M\,\frac{(|z|/r)^{m}}{1-|z|/r}\leq\frac{\varepsilon}{2},$$
so that the total error is bounded by $\varepsilon/2+\varepsilon/2=\varepsilon$.
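With floating point numbers in the role of effective numbers, the above algorithm may be sketched in Python as follows; the interface germ.expand, germ.radius and germ.bound is the one described in this section, and all other names are ours.

    def eval_germ(germ, z, eps):
        # Approximate f(z) with error at most eps, assuming |z| < germ.radius().
        # germ.expand(n) returns the first n coefficients, germ.bound(r) an upper
        # bound for |f| on the closed disk of radius r.
        rho = germ.radius()
        assert abs(z) < rho, "the evaluation point must lie inside the certified disk"
        r = (rho + abs(z)) / 2            # intermediate radius with |z| < r < rho
        M = germ.bound(r)                 # Cauchy: |f_k| <= M / r^k
        q = abs(z) / r                    # q < 1
        m, tail = 0, (M / (1 - q) if M > 0 else 0.0)
        while tail > eps / 2:             # smallest m with M q^m / (1 - q) <= eps/2
            m, tail = m + 1, tail * q
        coeffs = germ.expand(m)
        # in a genuinely effective implementation, the partial sum would itself be
        # approximated within eps/2; here we simply sum floats
        return sum(c * z ** k for k, c in enumerate(coeffs))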
2.3. Effective paths

Any tuple $(z_{0},\ldots,z_{l})$ of complex numbers determines a unique broken line or affine path in $\mathbb{C}$, which is denoted by $z_{0}\rightarrow z_{1}\rightarrow\cdots\rightarrow z_{l}$. If $z_{0}=0$, then we call $\delta=z_{0}\rightarrow\cdots\rightarrow z_{l}$ a path of size $l$. We denote by $\mathcal{P}$ the space of all paths and by $\mathcal{P}_{\mathrm{aff}}$ the space of all affine paths. The trivial path of size $0$ is denoted by $\epsilon$. An affine path $z_{0}\rightarrow\cdots\rightarrow z_{l}$ is said to be effective if $z_{0},\ldots,z_{l}\in\mathbb{C}^{\mathrm{eff}}$; we denote by Path_aff the type of all effective affine paths and by Path the type of all effective paths. Another notation for the path $z_{0}\rightarrow z_{1}\rightarrow\cdots\rightarrow z_{l}$ is $[z_{1}-z_{0},\ldots,z_{l}-z_{l-1}]$; the first notation lists the successive vertices of the path, the second one its successive increments.

Let $\delta=[\delta_{1},\ldots,\delta_{k}]=0\rightarrow z_{1}\rightarrow\cdots\rightarrow z_{k}$ and $\eta=[\eta_{1},\ldots,\eta_{k'}]=0\rightarrow z'_{1}\rightarrow\cdots\rightarrow z'_{k'}$ be two paths in $\mathcal{P}$. Then we define their concatenation $\delta+\eta$ by
$$\delta+\eta=[\delta_{1},\ldots,\delta_{k},\eta_{1},\ldots,\eta_{k'}]=0\rightarrow z_{1}\rightarrow\cdots\rightarrow z_{k}\rightarrow z_{k}+z'_{1}\rightarrow\cdots\rightarrow z_{k}+z'_{k'}.$$
We also define the inverse $-\delta$ of $\delta$ by
$$-\delta=[-\delta_{k},\ldots,-\delta_{1}]=0\rightarrow z_{k-1}-z_{k}\rightarrow\cdots\rightarrow z_{1}-z_{k}\rightarrow-z_{k}.$$
The length of $\delta$ is defined by
$$|\delta|=|\delta_{1}|+\cdots+|\delta_{k}|.$$
Notice that $|-\delta|=|\delta|$ and $|\delta+\eta|=|\delta|+|\eta|$.

We say that $\eta$ is a truncation of $\delta$ if $k'\leq k$ and $\eta_{i}=\delta_{i}$ for $i=1,\ldots,k'$. In this case, we write $\eta\sqsubseteq\delta$ and $\delta-\eta=[\delta_{k'+1},\ldots,\delta_{k}]$, so that $\eta+(\delta-\eta)=\delta$ (notice, however, that we do not always have $\delta-\eta=\delta+(-\eta)$). A subdivision of a path $\delta=[\delta_{1},\ldots,\delta_{k}]$ is a path of the form
$$\delta^{*}=[\lambda_{1,1}\delta_{1},\ldots,\lambda_{1,k_{1}}\delta_{1},\ldots,\lambda_{k,1}\delta_{k},\ldots,\lambda_{k,k_{k}}\delta_{k}],$$
where the $\lambda_{i,j}\in(0,1]$ satisfy $\lambda_{i,1}+\cdots+\lambda_{i,k_{i}}=1$ for all $i$. If $\delta^{*}$ is a subdivision of $\delta$, then we write $\delta^{*}\trianglerighteq\delta$. Given $\delta$, $\delta'$ and $\delta''$ with $\delta'\trianglerighteq\delta$ and $\delta''\trianglerighteq\delta$, there exists a shortest path $\delta'''$ with $\delta'''\trianglerighteq\delta'$ and $\delta'''\trianglerighteq\delta''$. We call $\delta'''$ the shortest common subdivision of $\delta'$ and $\delta''$ and we denote it by $\delta'''=\delta'\wedge\delta''$. If we restrict our attention to paths $z_{0}\rightarrow\cdots\rightarrow z_{l}$ such that $z_{0},\ldots,z_{l}$ lie in a subfield of $\mathbb{C}$ with an effective zero test (like $\mathbb{Q}[\mathrm{i}]$), then the relations $\sqsubseteq$, $\trianglerighteq$ and the operation $\wedge$ are clearly effective.

Given an analytic function $f$ at the origin, which can be continued analytically along a path $\delta\in\mathcal{P}$, we will denote by $f(\delta)$ the value of $f$ at the endpoint of the path and by $f_{\delta}$ the analytic function at zero such that $f_{\delta}(\varepsilon)=f(\delta+[\varepsilon])$ for all sufficiently small $\varepsilon\in\mathbb{C}$. If $\delta=[\delta_{1},\ldots,\delta_{l}]$, then we will also write $f_{\delta}=f_{\delta_{1},\ldots,\delta_{l}}=(\cdots(f_{\delta_{1}})_{\delta_{2}}\cdots)_{\delta_{l}}$.

2.4. Effective analytic functions

The domain of $f$ is the set of all paths $\delta$ along which $f$ can be continued analytically. A quasi-effective analytic function is an instance of the abstract data type Analytic, which inherits from the type of effective germs, and which carries the following additional method:

- A method continue: $\mathbb{C}^{\mathrm{eff}}\to$ Analytic which computes $f_{[\delta]}$ for all $\delta\in\mathbb{C}^{\mathrm{eff}}$ with $|\delta|<\rho(f)$.

We will denote by $f^{*}$ the analytic function which is represented by $f$. The domain $\mathrm{Dom}\,f\subseteq\mathrm{Dom}\,f^{*}$ of $f$ is the set of all paths $[\delta_{1},\ldots,\delta_{n}]\in$ Path with
$$|\delta_{i}|<\rho(f_{\delta_{1},\ldots,\delta_{i-1}})$$
for all $i\in\{1,\ldots,n\}$. The functions $\rho$ and $\lceil\cdot\rceil$ thereby extend to all continuations $f_{\delta}$ of $f$, for all paths $\delta$ in the domain of $f$.

Remark 2. Notice that, similarly as in Remark 1, we have reused the notation $f_{\delta}$ in order to denote the result of applying the method continue to $f$ along the successive increments of $\delta$. So $f_{\delta}$ is again a quasi-effective analytic function, which should be distinguished from $(f^{*})_{\delta}=(f_{\delta})^{*}$. Given $\delta=[\delta_{1},\ldots,\delta_{l}]$, we will also denote $f(\delta)=f_{\delta_{1},\ldots,\delta_{l-1}}(\delta_{l})$ and $f_{\delta}=f_{\delta_{1},\ldots,\delta_{l}}$.

Let $f$ and $g$ be two quasi-effective analytic functions. We will write $f=g$ as soon as $f^{*}=g^{*}$. We say that $f$ and $g$ are equivalent, and we write $f\cong g$, if $f=g$, $\mathrm{Dom}\,f=\mathrm{Dom}\,g$, and $\rho(f_{\delta})=\rho(g_{\delta})$ and $\lceil f_{\delta}\rceil_{r}=\lceil g_{\delta}\rceil_{r}$ for all $\delta\in\mathrm{Dom}\,f$ and $r<\rho(f_{\delta})$. We say that $g$ is better than $f$, if $f=g$, $\mathrm{Dom}\,f\subseteq\mathrm{Dom}\,g$, and $\rho(f_{\delta})\leq\rho(g_{\delta})$ and $\lceil f_{\delta}\rceil_{r}\geq\lceil g_{\delta}\rceil_{r}$ for all $\delta\in\mathrm{Dom}\,f$ and $r<\rho(f_{\delta})$.

We say that a quasi-effective analytic function $f$ satisfies the homotopy condition, if

(H) whenever $\delta$, $\delta+[\delta']$, $\delta+[\delta',\delta'']$ and $\delta+[\delta'+\delta'']$ all belong to $\mathrm{Dom}\,f$, we have $f_{\delta+[\delta'+\delta'']}\cong f_{\delta+[\delta',\delta'']}$.

Here we understand that $f_{\delta+[\delta'+\delta'']}:=f_{\delta}$ if $\delta'+\delta''=0$.

Let $f$ be a quasi-effective analytic function and consider the functions $\delta\mapsto\rho(f_{[\delta]})$ and $(\delta,r)\mapsto\lceil f_{[\delta]}\rceil_{r}$. We say that $f$ satisfies the local continuity condition, if there exist continuous functions
$$R:\ B_{\rho(f)}=\{z\in\mathbb{C}:|z|<\rho(f)\}\ \longrightarrow\ \mathbb{R}^{>},$$
$$N:\ \{(\delta,r)\in B_{\rho(f)}\times\mathbb{R}^{\geq}:r<R(\delta)\}\ \longrightarrow\ \mathbb{R}^{>},$$
such that $\rho(f_{[\delta]})=R(\delta)$ and $\lceil f_{[\delta]}\rceil_{r}=N(\delta,r)$ for all $\delta\in B_{\rho(f)}\cap\mathbb{C}^{\mathrm{eff}}$ and $r\in\mathbb{R}^{\mathrm{eff}}$ with $r<\rho(f_{[\delta]})$. We say that $f$ satisfies the continuity condition, if

(C) $f_{\delta}$ satisfies the local continuity condition for each $\delta\in\mathrm{Dom}\,f$.

We say that $f$ is an effective analytic function if it satisfies both the homotopy condition and the continuity condition. In what follows, we will always assume that instances of the type Analytic satisfy the conditions (H) and (C).

Remark 3. In fact, there are several alternatives for the definition of effective analytic functions, obtained by varying the homotopy and continuity conditions. The future will teach us which conditions are best suited for computations with complex functions. Nevertheless, there is no doubt that the "spirit" of the definition should be preserved: quasi-effectiveness plus additional conditions which allow us to prove global properties.

2.5. Analytic continuation along subdivided paths

The extended domain $\mathrm{Dom}^{*}f$ of an effective analytic function $f$ is the set of all paths $\delta$, such that there exists a subdivision $\delta^{*}$ of $\delta$ with $\delta^{*}\in\mathrm{Dom}\,f$. For a tuple $(f_{1},\ldots,f_{k})$ of effective analytic functions, we also define $\mathrm{Dom}\,(f_{1},\ldots,f_{k})=\mathrm{Dom}\,f_{1}\cap\cdots\cap\mathrm{Dom}\,f_{k}$, and similarly for $\mathrm{Dom}^{*}(f_{1},\ldots,f_{k})$ and $\mathrm{Dom}\,(f_{1},\ldots,f_{k})^{*}$. We say that an effective analytic function (or a tuple of such functions) is faithful, if $\mathrm{Dom}^{*}f=\mathrm{Dom}\,f^{*}\cap$ Path, i.e. if $f$ can effectively be continued along every effective path along which the actual function $f^{*}$ admits an analytic continuation. We say that a subset $\mathcal{D}$ of Path is effective, if there exists an algorithm to decide whether a given effective path $\delta$ belongs to $\mathcal{D}$.

Now let us choose constants $\lambda,\mu\in(0,1)\cap\mathbb{Q}$ with $\lambda<\mu$ and consider an effective analytic function $f$. Then the following algorithm may be used in order to evaluate $f$ at the endpoint of any path $\delta\in\mathrm{Dom}^{*}f$:

Algorithm eval(f, δ)
Input: an effective analytic function $f$ and $\delta\in$ Path, such that $\delta\in\mathrm{Dom}^{*}f$.
Output: the value $f(\delta)$.

Step 1. [Handle trivial cases]
  If $\delta=\epsilon$, then return $f(0)$.
  Write $\delta=[\delta_{1}]+\eta$.
  Compute an approximation $\tilde\sigma$ of $\sigma:=|\delta_{1}|-\tfrac{\lambda+\mu}{2}\,\rho(f)$ with $|\tilde\sigma-\sigma|\leq\tfrac{\mu-\lambda}{2}\,\rho(f)$.
  If $\tilde\sigma\leq0$ then return eval($f_{[\delta_{1}]}$, $\eta$).
Step 2. [Subdivide path]
  Let $\delta'_{1}:=\lambda\,\rho(f)\,\delta_{1}/|\delta_{1}|$.
  Return eval($f_{[\delta'_{1}]}$, $[\delta_{1}-\delta'_{1}]+\eta$).

We notice that $\tilde\sigma\leq0$ implies $|\delta_{1}|\leq\mu\,\rho(f)<\rho(f)$ and that $\tilde\sigma>0$ implies $|\delta_{1}|\geq\lambda\,\rho(f)$, so that all continuations performed by the algorithm are licit and every subdivision step advances along $\delta_{1}$ by a step of length $\lambda\,\rho(f)$.
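In numerical form, with floats instead of effective numbers (so that the comparison with $\mu\,\rho(f)$ can be made directly instead of via the approximation $\tilde\sigma$), the recursion may be sketched in Python as follows; the method names radius and continue_to are assumptions of this sketch, and the function returns the continued germ, from which the value at the endpoint may then be read off with the evaluation method of section 2.2.

    def continue_along(f, path, lam=0.5, mu=0.75):
        # Continue the function f along path = [delta_1, ..., delta_n], given as a
        # list of complex increments, and return the continued function; requires
        # 0 < lam < mu < 1, as in the algorithm above.
        todo = list(path)
        while todo:
            delta = todo.pop(0)
            if delta == 0:
                continue
            rho = f.radius()
            if abs(delta) <= mu * rho:
                f = f.continue_to(delta)               # the whole step fits in the disk
            else:
                step = lam * rho * delta / abs(delta)  # shorten the step, keep its direction
                todo.insert(0, delta - step)           # push the remainder back onto the path
                f = f.continue_to(step)
        return f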
The correctness proof of this algorithm relies on three lemmas.

Lemma 1. Let $\delta^{*},\delta\in\mathrm{Dom}\,f$ with $\delta^{*}\trianglerighteq\delta$. Then $f_{\delta^{*}}\cong f_{\delta}$.

Proof. We prove the lemma by induction over the difference of the sizes of $\delta^{*}$ and $\delta$. If this difference vanishes, then $\delta^{*}=\delta$ and we are done. Otherwise, let $\alpha$ be the longest common truncation of $\delta$ and $\delta^{*}$ and write $\delta=\alpha+[\delta']+\beta$ and $\delta^{*}=\alpha+[\delta'',\delta''']+\beta^{*}$, so that $\delta''$ is a proper initial piece of the segment $\delta'$. The paths $\alpha+[\delta'']$ and $\alpha+[\delta'',\delta''']$ are truncations of $\delta^{*}$ and therefore belong to $\mathrm{Dom}\,f$; moreover $|\delta''+\delta'''|\leq|\delta'|<\rho(f_{\alpha})$, so that $\alpha+[\delta''+\delta''']\in\mathrm{Dom}\,f$ as well. Consequently, the homotopy condition implies $f_{\alpha+[\delta'',\delta''']}\cong f_{\alpha+[\delta''+\delta''']}$, whence $\delta^{**}:=\alpha+[\delta''+\delta''']+\beta^{*}$ also belongs to $\mathrm{Dom}\,f$ and $f_{\delta^{*}}\cong f_{\delta^{**}}$. If $\delta^{**}=\delta$, then we are done. Otherwise, $\delta^{**}$ is a subdivision of $\delta$ of strictly smaller size than $\delta^{*}$, and the induction hypothesis yields $f_{\delta^{**}}\cong f_{\delta}$, whence $f_{\delta^{*}}\cong f_{\delta}$. □

Remark 4. In fact, the above lemma even holds for homotopic paths $\delta^{*},\delta\in\mathrm{Dom}\,f$, but we will not need this in what follows.

Lemma 2. Let $\delta$ be a path and let $\delta',\delta''\in\mathrm{Dom}\,f$ be such that $\delta'\trianglerighteq\delta$ and $\delta''\trianglerighteq\delta$. Then $\delta'\wedge\delta''\in\mathrm{Dom}\,f$.

Proof. Let $\delta'\wedge\delta''=[\delta_{1},\ldots,\delta_{l}]$. Let us prove by induction over $k$ that $\varphi_{k}=[\delta_{1},\ldots,\delta_{k}]\in\mathrm{Dom}\,f$. For $k=0$ there is nothing to do. Assume now that we have proved the assertion for a given $k<l$ and let us prove it for $k+1$. Since $\delta'\wedge\delta''$ is the shortest common subdivision of $\delta'$ and $\delta''$, there exist an $i\in\{1,2\}$ and a truncation $\psi$ of $\delta^{(i)}$ (where $\delta^{(1)}=\delta'$ and $\delta^{(2)}=\delta''$), such that $\varphi_{k}\trianglerighteq\psi$ and such that $\delta_{k+1}$ is an initial piece of the increment $\psi'$ with $\psi+[\psi']\sqsubseteq\delta^{(i)}$. By Lemma 1, we have $\rho(f_{\varphi_{k}})=\rho(f_{\psi})$. Therefore $|\delta_{k+1}|\leq|\psi'|<\rho(f_{\psi})=\rho(f_{\varphi_{k}})$, which shows that $\varphi_{k+1}\in\mathrm{Dom}\,f$, as desired. □

Lemma 3. Let $\delta\in\mathrm{Dom}\,f$. Then there exists a $\rho_{\min}>0$ such that $\rho(f_{\psi})\geq\rho_{\min}$ for every $\psi\in\mathrm{Dom}\,f$ with $\psi\sqsubseteq\delta^{*}$ for some $\delta^{*}\trianglerighteq\delta$.

Proof. Write $\delta=[\delta_{1},\ldots,\delta_{l}]$ and let $i\in\{1,\ldots,l\}$. The continuity condition implies that the function $t\in[0,1]\mapsto\rho(f_{\delta_{1},\ldots,\delta_{i-1},t\delta_{i}})$ is the restriction of a continuous function on the compact set $[0,1]$. Consequently, this function admits a lower bound $\rho_{i}>0$ on $[0,1]$. On the other hand, any $\psi\in\mathrm{Dom}\,f$ with $\psi\sqsubseteq\delta^{*}$ for some $\delta^{*}\trianglerighteq\delta$ is a subdivision of a path of the form $[\delta_{1},\ldots,\delta_{i-1},t\delta_{i}]$ with $i\in\{1,\ldots,l\}$ and $t\in[0,1]$. Taking $\rho_{\min}=\min\{\rho_{1},\ldots,\rho_{l}\}$, we conclude by Lemma 1. □

Proof of the algorithm. The algorithm is clearly correct if it terminates. Assume for contradiction that the algorithm does not terminate for some $\delta=[\delta_{1},\ldots,\delta_{l}]\in\mathrm{Dom}^{*}f$ and let $\delta^{*}$ be a subdivision of $\delta$ in $\mathrm{Dom}\,f$. Let $\delta'_{1},\delta'_{2},\ldots$ be the successive increments, such that eval is called successively for $f_{\delta'_{1}},f_{\delta'_{1},\delta'_{2}},\ldots$, and let $\rho_{\min}>0$ be the constant obtained when applying Lemma 3 to $\delta^{*}$. Each call which consumes an entire remaining increment decreases the size of the remaining path, so infinitely many of the calls must perform a subdivision step. Since $|\delta'_{1}|+|\delta'_{2}|+\cdots\leq|\delta|$, there consequently exists an $i$ such that the $i$-th call performs a subdivision step with $|\delta'_{i}|<\lambda\,\rho_{\min}$. Now let $\theta=[\delta'_{1},\ldots,\delta'_{i-1}]$ and let $k$ be such that $|[\delta^{*}_{1},\ldots,\delta^{*}_{k}]|\leq|\theta|<|[\delta^{*}_{1},\ldots,\delta^{*}_{k+1}]|$. Then
$$\theta'=[\delta^{*}_{1},\ldots,\delta^{*}_{k},\ \delta'_{1}+\cdots+\delta'_{i-1}-\delta^{*}_{1}-\cdots-\delta^{*}_{k}]$$
ends in the same point as $\theta$ and belongs to $\mathrm{Dom}\,f$, since it is a truncation of $\delta^{*}$ followed by an initial piece of the increment $\delta^{*}_{k+1}$. Both $\theta$ and $\theta'$ are subdivisions of the truncation of $\delta$ at arc length $|\theta|$, so Lemma 2 implies $\theta\wedge\theta'\in\mathrm{Dom}\,f$, and Lemma 1 yields $\rho(f_{\theta})=\rho(f_{\theta\wedge\theta'})=\rho(f_{\theta'})\geq\rho_{\min}$. This yields the desired contradiction, since a subdivision step at the $i$-th call has length $|\delta'_{i}|=\lambda\,\rho(f_{\theta})\geq\lambda\,\rho_{\min}$. □

3. Operations on effective analytic functions

In this section we show how to effectively construct elementary analytic functions from the constants in $\mathbb{C}^{\mathrm{eff}}$ and the identity function $z$, using the operations $+$, $-$, $\times$, $/$, $\partial/\partial z$, $\int$, $\exp$ and $\log$. In our specifications of the corresponding concrete data types, which all inherit from Analytic, we omit the algorithms for computing the coefficients of the series expansions and refer to the literature for a detailed treatment of this matter.

3.1. Basic effective analytic functions

Constant effective analytic functions are implemented by the following concrete type Const, which derives from Analytic (this is reflected through the symbol ▷ below):

  Class Const ▷ Analytic
    c : C^eff
    Const (c : C^eff)
    radius () := +∞
    bound (r) := |c|
    continue (δ) := Const(c)

The method Const is the constructor of the class; the method continue shows how the constructor is called. In a similar way, the following data type implements the identity function:

  Class Identity ▷ Analytic
    z0 : C^eff
    Identity () := Identity(0)
    Identity (z0 : C^eff)
    radius () := +∞
    bound (r) := |z0| + r
    continue (δ) := Identity(z0 + δ)

The default constructor with zero arguments returns the identity function centered at $0$. The other constructor with one argument returns the identity function centered at $z_{0}$, i.e. the germ of $z\mapsto z_{0}+z$ at the origin. The conditions (H) and (C) are trivially satisfied by the constant functions and the identity function; they all have domain $\mathcal{P}$.

3.2. The ring operations

The addition of effective analytic functions is implemented as follows:

  Class Add ▷ Analytic
    f, g : Analytic
    Add (f, g : Analytic)
    radius () := min(ρ(f), ρ(g))
    bound (r) := ⌈f⌉_r + ⌈g⌉_r
    continue (δ) := Add(f_[δ], g_[δ])

We clearly have $\mathrm{Dom}\,(f+g)=\mathrm{Dom}\,(f,g)$ and $(f+g)_{\delta}=f_{\delta}+g_{\delta}$ for all paths $\delta$ in $\mathrm{Dom}\,(f+g)$. Consequently, condition (H) is satisfied by $f+g$. Since the minimum of two continuous functions is again continuous, condition (C) is satisfied as well.
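These specifications translate almost literally into Python, again with floats in the role of effective numbers; the method name continue_to is ours, since continue is a reserved word in Python.

    class Const:
        def __init__(self, c):
            self.c = c
        def radius(self):
            return float("inf")
        def bound(self, r):
            return abs(self.c)
        def continue_to(self, delta):
            return Const(self.c)          # a constant is unchanged by continuation

    class Identity:
        def __init__(self, z0=0.0):
            self.z0 = z0                  # the identity function centered at z0
        def radius(self):
            return float("inf")
        def bound(self, r):
            return abs(self.z0) + r
        def continue_to(self, delta):
            return Identity(self.z0 + delta)

    class Add:
        def __init__(self, f, g):
            self.f, self.g = f, g
        def radius(self):
            return min(self.f.radius(), self.g.radius())
        def bound(self, r):
            return self.f.bound(r) + self.g.bound(r)
        def continue_to(self, delta):
            return Add(self.f.continue_to(delta), self.g.continue_to(delta))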
Subtraction is implemented in the same way as addition: only the computation of the series expansion changes. Multiplication is implemented as follows:

  Class Mul ▷ Analytic
    f, g : Analytic
    Mul (f, g : Analytic)
    radius () := min(ρ(f), ρ(g))
    bound (r) := ⌈f⌉_r · ⌈g⌉_r
    continue (δ) := Mul(f_[δ], g_[δ])

We again have $\mathrm{Dom}\,(f\,g)=\mathrm{Dom}\,(f,g)$, and the conditions (H) and (C) are verified in a similar way as in the case of addition.

3.3. Differentiation and integration

In order to differentiate an effective analytic function $f$, we have to be able to bound $f'$ on each disk $\bar B_{r}$ with $r<\rho(f)$. Fixing a number $\tau\in(0,1)\cap\mathbb{Q}$, this can be done as follows:

  Class Derive ▷ Analytic
    f : Analytic
    Derive (f : Analytic)
    radius () := ρ(f)
    bound (r) := s ⌈f⌉_s / (s − r)², where s := r + τ (ρ(f) − r)
    continue (δ) := Derive(f_[δ])

Let us show that the bound for the norm is indeed correct. Given the bound $\lceil f\rceil_{s}$ for $f$ on $\bar B_{s}$, Cauchy's formula implies $|f_{n}|\leq\lceil f\rceil_{s}/s^{n}$ for all $n$. Consequently, for all $z\in\bar B_{r}$:
$$|f'(z)|=\Bigl|\sum_{n\geq0}(n+1)\,f_{n+1}\,z^{n}\Bigr|\leq\sum_{n\geq0}(n+1)\,\lceil f\rceil_{s}\,\frac{r^{n}}{s^{n+1}}=\frac{s\,\lceil f\rceil_{s}}{(s-r)^{2}}.$$
We have $\mathrm{Dom}\,f'=\mathrm{Dom}\,f$, and the fact that $\tau$ is a constant which is fixed once and for all ensures that condition (C) is again satisfied by $f'$. The actual choice of $\tau$ is a compromise between keeping $\lceil f\rceil_{s}$ as small as possible and keeping $s-r$ as large as possible.

Bounding the value of an integral on a disk is simpler, using the formula
$$\Bigl|\int_{0}^{z}f(w)\,\mathrm{d}w\Bigr|\leq|z|\,\max_{w\in[0,z]}|f(w)|\leq r\,\lceil f\rceil_{r}\qquad(z\in\bar B_{r}).$$
For the analytic continuation of integrals, we have to keep track of the integration constant, which can be determined using the evaluation algorithm from section 2. In the class below, this integration constant corresponds to $c$:

  Class Integrate ▷ Analytic
    f : Analytic
    c : C^eff
    Integrate (f : Analytic) := Integrate(f, 0)
    Integrate (f : Analytic, c : C^eff)
    radius () := ρ(f)
    bound (r) := |c| + r ⌈f⌉_r
    continue (δ) := Integrate(f_[δ], eval(this, δ))

The domain of $\int_{0}^{z}f(t)\,\mathrm{d}t$ is the same as the domain of $f$.
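The bound method of Derive, with the derivation just given, reads as follows in the floating point sketch; tau is the fixed tuning parameter, and the special case of an infinite radius is handled by picking an arbitrary finite intermediate radius.

    class Derive:
        def __init__(self, f, tau=0.5):
            self.f, self.tau = f, tau
        def radius(self):
            return self.f.radius()
        def bound(self, r):
            rho = self.f.radius()
            # intermediate radius r < s < rho; any finite s > r works when rho is infinite
            s = r + self.tau * (rho - r) if rho != float("inf") else 2 * r + 1
            # Cauchy: |f_n| <= bound_f(s) / s^n, hence |f'| <= s * bound_f(s) / (s - r)^2 on B_r
            return self.f.bound(s) * s / (s - r) ** 2
        def continue_to(self, delta):
            return Derive(self.f.continue_to(delta), self.tau)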
3.4. Inversion

In order to compute the inverse $1/f$ of an effective analytic function $f$ with $f(0)\neq0$, we should in particular show how to compute a lower bound for the norm of the smallest zero of $f$. Moreover, this computation should be continuous as a function of the path. Again, let $\tau\in(0,1)\cap\mathbb{Q}$ be a parameter, let $s:=\tau\,\rho(f)$, and consider:

  Class Invert ▷ Analytic
    f : Analytic
    Invert (f : Analytic)
    radius () := s |f(0)| / (2 ⌈f⌉_s + |f(0)|)
    bound (r) := 1 / (|f(0)| − ⌈f⌉_s r / (s − r))
    continue (δ) := Invert(f_[δ])

Indeed, Cauchy's inequalities yield $|f(z)-f(0)|\leq\lceil f\rceil_{s}\,r/(s-r)$ for all $|z|\leq r<s$, so that $|f(z)|\geq|f(0)|-\lceil f\rceil_{s}\,r/(s-r)\geq|f(0)|/2>0$ whenever $r$ is smaller than the above effective radius; the stated bound for $1/f$ on $\bar B_{r}$ follows. Again, the choice of $\tau$ is a compromise between keeping $s=\tau\,\rho(f)$ reasonably large and keeping the bound $\lceil f\rceil_{s}$ as small as possible. We have
$$\mathrm{Dom}^{*}\,(1/f)=\{\delta\in\mathrm{Dom}^{*}f:\ f\text{ does not vanish on }\delta\}.$$
Notice that we cannot necessarily test whether $f(\delta)=0$. Consequently, $\mathrm{Dom}^{*}\,(1/f)$ is not necessarily effective.

Remark 5. Instead of fixing $\tau\in(0,1)\cap\mathbb{Q}$ once and for all, it is also possible to compute a $\tau$ which balances the two competing quantities in the above formula for the effective radius, using a fast algorithm for finding zeros of a real function, like the secant method.

3.5. Exponentiation and logarithm

The logarithm of an effective analytic function $f$ with $f(0)\neq0$ can be computed using the formula
$$\log f=\log f(0)+\int_{0}^{z}\frac{f'(t)}{f(t)}\,\mathrm{d}t.$$
As to exponentiation, we use the following method:

  Class Exp ▷ Analytic
    f : Analytic
    Exp (f : Analytic)
    radius () := ρ(f)
    bound (r) := exp ⌈f⌉_r
    continue (δ) := Exp(f_[δ])

We have $\mathrm{Dom}^{*}(\log f)=\{\delta\in\mathrm{Dom}^{*}f:f\text{ does not vanish on }\delta\}$ and $\mathrm{Dom}\,(\exp f)=\mathrm{Dom}\,f$.

4. Solving differential equations

In this section, we show how to effectively solve linear and algebraic differential equations. As in the previous section, we omit the algorithms for computing the series expansions. We use the classical majorant technique of Cauchy–Kovalevskaya in order to compute effective bounds. Given two power series $f,g\in\mathbb{C}[[z_{1},\ldots,z_{k}]]$, we say that $f$ is majorized by $g$, and we write $f\trianglelefteq g$, if $g\in\mathbb{R}^{\geq}[[z_{1},\ldots,z_{k}]]$ and $|f_{n_{1},\ldots,n_{k}}|\leq g_{n_{1},\ldots,n_{k}}$ for all $n_{1},\ldots,n_{k}\in\mathbb{N}$. If $f$ and $g$ are quasi-effective analytic functions, then we write $f\trianglelefteq g$ if $f^{*}\trianglelefteq g^{*}$. Given $\sigma>0$, we also denote $\Phi_{\sigma}=(1-\sigma z)^{-1}$.

4.1. Linear differential equations

Let $L_{0},\ldots,L_{l-1}$ be effective analytic functions and consider the equation

(1)  $f^{(l)}=L_{l-1}\,f^{(l-1)}+\cdots+L_{0}\,f$

with initial conditions $f(0)=\lambda_{0},\ldots,f^{(l-1)}(0)=\lambda_{l-1}$ in $\mathbb{C}^{\mathrm{eff}}$. We will show that the unique solution $f$ of this equation can again be represented by an effective analytic function, with $\mathrm{Dom}\,f\supseteq\mathrm{Dom}\,(L_{0},\ldots,L_{l-1})$. Notice that any linear differential equation of the form $L_{l}\,f^{(l)}+\cdots+L_{0}\,f=g$ with $L_{l}(0)\neq0$ can be reduced to the above form (using division by $L_{l}$, differentiation, and linearity if $g\neq0$).

We first notice that the coefficients of $f$ may be computed recursively using the equation

(2)  $f=\lambda_{0}+\lambda_{1}z+\cdots+\lambda_{l-1}\dfrac{z^{l-1}}{(l-1)!}+\displaystyle\int^{l}\bigl(L_{l-1}\,f^{(l-1)}+\cdots+L_{0}\,f\bigr),$

where $\int$ stands for integration from $0$ and $\int^{l}$ for its $l$-fold iteration. Assume that $M\in\mathbb{R}^{\geq}$ and $\sigma\in\mathbb{R}^{>}$ are such that $L_{i}\trianglelefteq M\,\Phi_{\sigma}$ for all $i$. Then the equation

(3)  $g=\hat\lambda_{0}+\hat\lambda_{1}z+\cdots+\hat\lambda_{l-1}\dfrac{z^{l-1}}{(l-1)!}+\displaystyle\int^{l}\bigl(M\,\Phi_{\sigma}\,g^{(l-1)}+\cdots+M\,\Phi_{\sigma}\,g+R\bigr)$

is a majorant equation of (2) for any choices of $\hat\lambda_{0},\ldots,\hat\lambda_{l-1}\in\mathbb{R}^{\geq}$ and $R\in\mathbb{R}^{\geq}[[z]]$ such that $|\lambda_{0}|\leq\hat\lambda_{0},\ldots,|\lambda_{l-1}|\leq\hat\lambda_{l-1}$. Let
$$h=\Phi_{\sigma}^{(lM+1)/\sigma},$$
so that $h^{(k)}=(lM+1)(lM+\sigma+1)\cdots(lM+(k-1)\sigma+1)\,\Phi_{\sigma}^{(lM+1)/\sigma+k}$ for all $k$. We take $\hat\lambda_{i}=C\,h^{(i)}(0)$ for all $i\in\{0,\ldots,l-1\}$, where
$$C=\max\{|\lambda_{0}|,\ldots,|\lambda_{l-1}|\}\geq\max\Bigl\{\frac{|\lambda_{0}|}{h(0)},\ldots,\frac{|\lambda_{l-1}|}{h^{(l-1)}(0)}\Bigr\}.$$
Now we observe that
$$M\,\Phi_{\sigma}\,h^{(l-1)}+\cdots+M\,\Phi_{\sigma}\,h\ \trianglelefteq\ l\,M\,\Phi_{\sigma}\,h^{(l-1)}\ =\ l\,M\,(lM+1)\cdots(lM+(l-2)\sigma+1)\,\Phi_{\sigma}^{(lM+1)/\sigma+l}\ \trianglelefteq\ h^{(l)},$$
since $l\,M\leq lM+(l-1)\sigma+1$. Therefore, we may take
$$R=C\,h^{(l)}-M\,\Phi_{\sigma}\,C\,h^{(l-1)}-\cdots-M\,\Phi_{\sigma}\,C\,h\ \in\ \mathbb{R}^{\geq}[[z]].$$
This choice ensures that (3) admits the particularly simple solution $g=C\,h$. The majorant technique now implies that $f\trianglelefteq C\,h$.

From the algorithmic point of view, let $\rho=\min\{\rho(L_{0}),\ldots,\rho(L_{l-1})\}$ and assume that we want to compute a bound for $|f|$ on $\bar B_{r}$ for some $r\in\mathbb{R}^{\mathrm{eff}}$ with $r<\rho$. Let $\tau\in(0,1)\cap\mathbb{Q}$ be a fixed constant. Then we may apply the above computation with $\sigma=1/s$, $s=r+(\rho-r)\,\tau$ and $M=\max\{\lceil L_{0}\rceil_{s},\ldots,\lceil L_{l-1}\rceil_{s}\}$, since Cauchy's inequalities then yield $L_{i}\trianglelefteq M\,\Phi_{\sigma}$ for all $i$. From the majoration $f\trianglelefteq C\,h$, we deduce in particular that
$$\lceil f\rceil_{r}\leq C\,\Bigl(1-\frac{r}{s}\Bigr)^{-(lM+1)\,s}.$$
This leads to the following effective solution of (1):

  Class LinearODE ▷ Analytic
    L = (L_0, …, L_{l−1}) : Analytic^l
    λ = (λ_0, …, λ_{l−1}) : (C^eff)^l
    LinearODE (L : Analytic^l, λ : (C^eff)^l)
    radius () := min(ρ(L_0), …, ρ(L_{l−1}))
    continue (δ) := LinearODE((L_{0,[δ]}, …, L_{l−1,[δ]}), (f(δ), f′(δ), …, f^{(l−1)}(δ)))

Here $f(\delta),f'(\delta),\ldots,f^{(l-1)}(\delta)$ denote the values at $\delta$ of the current instance and of its derivatives, computed with the evaluation algorithm from section 2; as usual in object oriented languages, the current instance is implicit to the methods. The bound method is given by

  Method bound (r)
    s := r + τ (radius() − r)
    C := max{|λ_0|, …, |λ_{l−1}|}
    M := max{⌈L_0⌉_s, …, ⌈L_{l−1}⌉_s}
    return C (1 − r/s)^{−(lM+1) s}

We have proved the following theorem:

Theorem 2. Let $L_{0},\ldots,L_{l-1}$ be effective analytic functions and let $\lambda_{0},\ldots,\lambda_{l-1}\in\mathbb{C}^{\mathrm{eff}}$. Then there exists a faithful effective analytic solution $f$ to (1) with $\mathrm{Dom}\,f=\mathrm{Dom}\,(L_{0},\ldots,L_{l-1})$. In particular, if $\mathrm{Dom}\,(L_{0},\ldots,L_{l-1})$ is effective, then so is $\mathrm{Dom}\,f$.
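In the floating point sketch, the bound method becomes the following helper; it works directly on the list of coefficient functions and on the initial conditions, and the exponent comes from the majorant $h$ chosen above.

    import math

    def linear_ode_bound(Ls, lambdas, r, tau=0.5):
        # Upper bound for |f| on the closed disk of radius r, where f solves
        #   f^(l) = L_{l-1} f^(l-1) + ... + L_0 f,  f^(i)(0) = lambdas[i],
        # following the majorant f <= C * (1 - z/s)^(-(l*M+1)*s) derived above.
        l = len(Ls)
        rho = min(L.radius() for L in Ls)          # certified radius of the solution
        assert r < rho and math.isfinite(rho)
        s = r + tau * (rho - r)                    # intermediate radius r < s < rho
        C = max(abs(lam) for lam in lambdas)
        M = max(L.bound(s) for L in Ls)            # then L_i is majorized by M/(1 - z/s)
        return C * (1 - r / s) ** (-(l * M + 1) * s)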
4.2. Systems of algebraic differential equations

Let us now consider a system of algebraic differential equations

(4)  $\dfrac{\mathrm{d}}{\mathrm{d}z}\,(f_{1},\ldots,f_{l})=(P_{1}(f_{1},\ldots,f_{l}),\ldots,P_{l}(f_{1},\ldots,f_{l}))$

with initial conditions $f_{1}(0)=\lambda_{1},\ldots,f_{l}(0)=\lambda_{l}$ in $\mathbb{C}^{\mathrm{eff}}$, where $P_{1},\ldots,P_{l}$ are polynomials in $l$ variables with coefficients in $\mathbb{C}^{\mathrm{eff}}$. Modulo the change of variables
$$f_{i}=\lambda_{i}+\tilde f_{i},\qquad\tilde P_{i}(\tilde f_{1},\ldots,\tilde f_{l})=P_{i}(f_{1},\ldots,f_{l}),$$
we obtain a new system

(5)  $\dfrac{\mathrm{d}}{\mathrm{d}z}\,(\tilde f_{1},\ldots,\tilde f_{l})=(\tilde P_{1}(\tilde f_{1},\ldots,\tilde f_{l}),\ldots,\tilde P_{l}(\tilde f_{1},\ldots,\tilde f_{l}))$

with initial conditions $\tilde f_{1}(0)=\cdots=\tilde f_{l}(0)=0$. Let $M>0$ and $\sigma>0$ be such that
$$\tilde P_{i}(z_{1},\ldots,z_{l})\trianglelefteq M\,\Phi_{\sigma}(z_{1}+\cdots+z_{l})$$
for all $i$, where $\Phi_{\sigma}(u)=(1-\sigma u)^{-1}$. Then the system of differential equations

(6)  $\dfrac{\mathrm{d}}{\mathrm{d}z}\,(g_{1},\ldots,g_{l})=(M\,\Phi_{\sigma}(g_{1}+\cdots+g_{l}),\ldots,M\,\Phi_{\sigma}(g_{1}+\cdots+g_{l}))$

with initial conditions $g_{1}(0)=\cdots=g_{l}(0)=0$ is a majorant system of (5). The unique solution of this system therefore satisfies $\tilde f_{i}\trianglelefteq g_{i}$ for all $i$. Now the $g_{i}$ really all satisfy the same first order equation
$$g'=M\,\Phi_{\sigma}(l\,g)=\frac{M}{1-\sigma\,l\,g}$$
with the same initial condition $g(0)=0$. The unique solution of this equation is
$$g_{1}=\cdots=g_{l}=g=\frac{1-\sqrt{1-2\,l\,\sigma\,M\,z}}{l\,\sigma},$$
which is a power series with radius of convergence
$$\rho=\frac{1}{2\,l\,\sigma\,M}$$
and positive coefficients, so that $|g(z)|\leq g(r)$ for any $|z|\leq r<\rho$.

As to the implementation, we may fix $\sigma=1$. For a tuple $\lambda=(\lambda_{1},\ldots,\lambda_{l})$ of initial conditions, we will denote the transformed polynomials by $\tilde P_{i}=P_{i}^{\lambda}$, i.e. $P_{i}^{\lambda}(z_{1},\ldots,z_{l})=P_{i}(\lambda_{1}+z_{1},\ldots,\lambda_{l}+z_{l})$. We will also write $\|P\|$ for the smallest number $M\in\mathbb{R}^{\geq}$ with $P(z_{1},\ldots,z_{l})\trianglelefteq M\,\Phi_{1}(z_{1}+\cdots+z_{l})$. The implementation uses a class Sys with information about the entire system of equations and its solutions, and a class Sol, deriving from Analytic, for each of the actual solutions $f_{i}$:

  Class Sys
    P = (P_1, …, P_l) : (C^eff[z_1, …, z_l])^l
    λ = (λ_1, …, λ_l) : (C^eff)^l
    M : R^eff
    Sys (P, λ) := (P := P, λ := λ, M := max(‖P_1^λ‖, …, ‖P_l^λ‖))
    sol (i), 1 ≤ i ≤ l := Sol(this, i)
    continue (δ) := Sys(P, (sol(1)(δ), …, sol(l)(δ)))

  Class Sol ▷ Analytic
    Σ : Sys
    i : {1, …, l}
    Sol (Σ : Sys, i)
    radius () := 1 / (2 l Σ.M)
    bound (r) := |Σ.λ_i| + (1 − sqrt(1 − 2 l Σ.M r)) / l
    continue (δ) := Σ_[δ].sol(i)

Here the new initial conditions in the method continue of Sys are the values $f_{1}(\delta),\ldots,f_{l}(\delta)$ of the solutions at $\delta$, computed with the evaluation algorithm from section 2, and the bound of Sol follows from $f_{i}=\lambda_{i}+\tilde f_{i}$ and $\tilde f_{i}\trianglelefteq g$.

In contrast with the linear case, the domain of the solution $(f_{1},\ldots,f_{l})$ to (4) is not necessarily effective. Nevertheless, the solution is faithful:

Theorem 3. Let $P_{1},\ldots,P_{l}$ be polynomials with coefficients in $\mathbb{C}^{\mathrm{eff}}$ and let $\lambda_{1},\ldots,\lambda_{l}\in\mathbb{C}^{\mathrm{eff}}$. Then the system (4) admits a faithful effective analytic solution $(f_{1},\ldots,f_{l})$.

Proof. Let $\delta=[\delta_{1},\ldots,\delta_{n}]\in\mathrm{Dom}\,(f_{1},\ldots,f_{l})^{*}\cap$ Path. Let us prove by induction over $k\in\{0,\ldots,n\}$ that $[\delta_{1},\ldots,\delta_{k}]\in\mathrm{Dom}^{*}(f_{1},\ldots,f_{l})$. For $k=0$ we have nothing to prove, so assume that $k>0$. For all $t\in[0,1]$, let
$$\delta_{(t)}=[\delta_{1},\ldots,\delta_{k-1},t\,\delta_{k}],\qquad
\lambda_{(t)}=(f_{1}^{*}(\delta_{(t)}),\ldots,f_{l}^{*}(\delta_{(t)})),\qquad
M_{(t)}=\max(\|P_{1}^{\lambda_{(t)}}\|,\ldots,\|P_{l}^{\lambda_{(t)}}\|).$$
Then $t\mapsto M_{(t)}$ is a continuous function, which admits a global maximum $M$ on $[0,1]$. Now let $\delta^{*}\trianglerighteq[\delta_{1},\ldots,\delta_{k-1}]$ be such that $\delta^{*}\in\mathrm{Dom}\,(f_{1},\ldots,f_{l})$, which exists by the induction hypothesis, and let $\nu=\delta_{k}/m$ for an $m\in\mathbb{N}$ with $|\nu|<1/(2\,l\,M)$. Then we have $\delta^{**}=\delta^{*}+[\nu,\ldots,\nu]$ (with $m$ copies of $\nu$) in $\mathrm{Dom}\,(f_{1},\ldots,f_{l})$ and $\delta^{**}\trianglerighteq[\delta_{1},\ldots,\delta_{k}]$, since after each partial step the effective radius of the continued solution is at least $1/(2\,l\,M)>|\nu|$. This proves that $[\delta_{1},\ldots,\delta_{k}]\in\mathrm{Dom}^{*}(f_{1},\ldots,f_{l})$ and we conclude by induction. □

Remark 6. In principle, it is possible to replace the algebraic differential equations by more general non-linear differential equations, by taking convergent power series for the $P_{i}$. However, this would require the generalization of the theory in this paper to analytic functions in several variables (interesting exceptions are power series which are polynomial in all but one variable, or entire functions). One would also need to handle the transformations $P_{i}\mapsto P_{i}^{\lambda}$ with additional care: these transformations really correspond to the analytic continuation of the $P_{i}$.
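Numerically, the certified radius and disk bound of Sol amount to the following small helpers; sigma is kept as a parameter, with sigma = 1 as in the implementation above, and M is assumed to majorize the transformed polynomials as described.

    import math

    def system_radius(M, l, sigma=1.0):
        # lower bound for the radius of convergence of each (shifted) solution
        return 1.0 / (2 * l * sigma * M)

    def system_bound(M, l, r, lam_abs, sigma=1.0):
        # upper bound for |f_i| on the closed disk of radius r < system_radius(M, l, sigma);
        # lam_abs is |lambda_i|, and g is the solution of the scalar majorant equation
        assert r < system_radius(M, l, sigma)
        g_r = (1 - math.sqrt(1 - 2 * l * sigma * M * r)) / (l * sigma)
        return lam_abs + g_r          # since f_i = lambda_i + f~_i and f~_i is majorized by g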
5. Conclusion and final remarks

Using a careful definition of effective analytic functions, we have shown how to answer many numerical questions about analytic solutions to differential equations. In order to generalize the present theory to analytic functions in several variables, or to more general analytic functions like solutions to convolution equations, we will probably have to weaken the conditions (H) and (C). Nevertheless, it is plausible that further research will lead to a more suitable definition which preserves the spirit of the present one.

The Columbus program implements the approach of the present paper in the weaker setting of double precision complex numbers instead of effective complex numbers. We plan to describe this program in more detail in a forthcoming paper, and in particular the "radar algorithm" which is used to graphically represent analytic functions (see figure 1). It would be nice to adapt or reimplement the program so as to permit computations with effective complex numbers when desired.

Figure 1. Plot of a solution to $f'=1+4\,f+f^{2}$ with the Columbus program.

The implementation of the Columbus program has also been instructive for understanding the complexity of our algorithms. For instance, the lower bound for the smallest zero in our algorithm for inversion can be extremely bad: the effective radius of convergence of an inverse may be smaller than its true radius of convergence by many orders of magnitude, in which case an analytic continuation which could in principle be carried out in a few steps requires a huge number of steps instead. In the Columbus program, this problem has been solved by using a numerical algorithm in order to determine the radius of convergence instead of the theoretically correct one. In some cases, this leads to an exponential speed-up. Theoretically correct approaches for solving the problem are to compute the smallest zero of the denominator in an appropriate disk, or to use transseries-like expansions. We plan to explain these approaches in more detail in a forthcoming paper. Another trick which can be used in concrete implementations is to override the default evaluation method from section 2 by a more efficient one when possible, or to implement dedicated methods for the evaluation or computation of higher derivatives.

Bibliography

[1] E. Bishop and D. Bridges. Constructive Analysis. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 1985.
[2] H. Cartan. Théorie élémentaire des fonctions analytiques d'une ou plusieurs variables complexes. Hermann, Paris, 1961.
[3] D. V. Chudnovsky and G. V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory (Computers in mathematics, Stanford, CA, 1986). In Lecture Notes in Pure and Applied Mathematics, volume 125, pages 109–232, New York, 1990. Dekker.
[4] J. Denef and L. Lipshitz. Power series solutions of algebraic differential equations. Math. Ann., 267:213–238, 1984.
[5] J. Denef and L. Lipshitz. Decision problems for differential equations. J. Symb. Logic, 54(3):941–950, 1989.
[6] A. G. Khovanskii. Fewnomials. American Mathematical Society, Providence, RI, 1991.
[7] A. Péladan-Germa. PhD thesis, Gage, École Polytechnique, Palaiseau, France, 1997.
[8] R. H. Risch. Algebraic properties of elementary functions in analysis. Amer. J. Math., 4(101):743–759, 1975.
[9] J. Shackell. A differential-equations approach to functional equivalence. In Proc. ISSAC '89, pages 7–10, Portland, Oregon, 1989. ACM Press.
[10] J. Shackell. Zero equivalence in function fields defined by differential equations. Trans. Amer. Math. Soc., 336(1):151–172, 1993.
[11] J. R. Shackell and J. van der Hoeven. Complexity bounds for zero-test algorithms. Technical Report 2001-63, Prépublications d'Orsay, 2001.
[12] J. van der Hoeven. Fast evaluation of holonomic functions. Theoret. Comput. Sci., 210:199–215, 1999.
[13] J. van der Hoeven. Fast evaluation of holonomic functions near and in regular singularities. J. Symbolic Comput., 31:717–743, 2001.
[14] J. van der Hoeven. Zero-testing, witness conjectures and differential diophantine approximation. Technical Report 2001-62, Prépublications d'Orsay, 2001.
[15] J. van der Hoeven. Columbus (software), 2000–2002.
[16] J. van der Hoeven. A new zero-test for formal power series. In Teo Mora, editor, Proc. ISSAC 2002, pages 117–122, Lille, France, July 2002.
[17] J. van der Hoeven. Relax, but don't be too lazy. J. Symbolic Comput., 34:479–542, 2002.
[18] J. van der Hoeven. Majorants for formal power series. Technical Report 2003-15, Université Paris-Sud, 2003.
[19] S. von Kowalevsky. Zur Theorie der partiellen Differentialgleichungen. J. Reine Angew. Math., 80:1–32, 1875.