CNRS, Département de Mathématiques, Bâtiment 425, Université Paris-Sud, 91405 Orsay Cedex, France

This work was partially supported by the ANR Gecko project.

Abstract. Given complex numbers $z_1, \ldots, z_n$, it is classical that linear dependencies $\lambda_1 z_1 + \cdots + \lambda_n z_n = 0$ with $\lambda_1, \ldots, \lambda_n \in \mathbb{Z}$ can be guessed using the LLL-algorithm. Similarly, given formal power series $f_1, \ldots, f_n \in \mathbb{C}[[z]]$, algorithms for computing Padé–Hermite forms provide a way to guess relations $P_1 f_1 + \cdots + P_n f_n = 0$ with $P_1, \ldots, P_n \in \mathbb{C}[z]$. Assuming that $f_1, \ldots, f_n$ have a radius of convergence $r > 0$ and given a real number $R > r$, we will describe a new algorithm for guessing linear dependencies of the form $g_1 f_1 + \cdots + g_n f_n = h$, where $g_1, \ldots, g_n, h \in \mathbb{C}[[z]]$ have a radius of convergence at least $R$. We will also present two alternative algorithms for the special cases of algebraic and Fuchsian dependencies.

1. Introduction

Consider an infinite sequence $f_0, f_1, \ldots$ of complex numbers. If $f_0, f_1, \ldots$ are the coefficients of a formal power series $f \in \mathbb{C}[[z]]$, then it is well-known [Pol37, Wilf04, FS96] that a lot of information about the behaviour of $f$ near its dominant singularity can be obtained from the asymptotic behaviour of the sequence $f_0, f_1, \ldots$. However, if $f$ is the solution to some complicated equation, then it can be hard to compute the asymptotic behaviour using formal methods. On the other hand, the coefficients $f_0, f_1, \ldots$ of such a solution can often be computed numerically up to a high order. This raises the question of how to guess the asymptotic behaviour of $f_0, f_1, \ldots$, based on this numerical evidence.

In fact, the systematic integration of "guessing tools" into symbolic computation packages would be a useful thing. Indeed, current systems can be quite good at all kinds of formal manipulations. However, in the daily practice of scientific discovery, it would be helpful if these systems could also detect hidden properties, which may not be directly apparent or expected, and whose validity generally depends on heuristics. One well-known tool in this direction is the LLL-algorithm. Given $n$ numbers $z_1, \ldots, z_n \in \mathbb{C}$, it can be used in order to guess relations of the form

\[ \lambda_1 z_1 + \cdots + \lambda_n z_n = 0 \qquad (\lambda_1, \ldots, \lambda_n \in \mathbb{Z}). \tag{1} \]

Given formal power series $f_1, \ldots, f_n \in \mathbb{C}[[z]]$, algorithms for the computation of Padé–Hermite forms [BL94, Der94] can be used in order to guess linear relations

\[ P_1 f_1 + \cdots + P_n f_n = 0 \qquad (P_1, \ldots, P_n \in \mathbb{C}[z]). \tag{2} \]

A well-known implementation is provided by the gfun package [SZ94]. Given a finite number of coefficients of a formal power series $f \in \mathbb{C}[[z]]$, this package is able to guess a closed form formula for $f$, or a linear differential equation with coefficients in $\mathbb{Q}[z]$ satisfied by $f$.

Unfortunately, many interesting formal power series do not admit closed form formulas and are not holonomic. In that case, we can still use asymptotic extrapolation [vdH06] in order to guess the asymptotic behaviour of the coefficients. However, this only provides us with some rough idea about the behaviour of $f$ at its dominant singularity. In practice, it often happens that $f$ locally satisfies an algebraic or differential equation with analytic coefficients, even though these coefficients fail to be polynomials. In this paper, we shall describe two approaches to deal with this situation.

In section 2, we first present a numerical algorithm for approximating the radius of convergence $\rho$ of $f$, assuming that only a finite number of its coefficients are known. In the case when $f$ admits a unique dominant isolated singularity $\sigma$, we will also describe several approaches to find $\sigma$.

In section 3, we consider the favourable situation when there exists an algorithm for the analytic continuation of $f$.
This algorithm should allow us to compute the Taylor series expansion of $f$ not only at the origin, but at any point on its Riemann surface. This is typically the case when $f$ is the solution of an initial value problem [vdH07]. We will show how to exploit the mechanism of analytic continuation in order to find so-called algebraic or Fuchsian dependencies at singularities.

In the remainder of the paper, we let $f_1, \ldots, f_n \in \mathbb{C}[[z]]$ be power series with radii of convergence at least $r > 0$. We assume that the coefficients of the $f_i$ can be computed up to a high order and with great accuracy. This is typically the case if the $f_i$ are given as explicit generating functions or solutions to functional equations. Given a radius $R > r$, we are interested in the determination of linear dependencies

\[ g_1 f_1 + \cdots + g_n f_n = h \qquad (g_1, \ldots, g_n, h \in \mathbb{C}[[z]]), \tag{3} \]

where $g_1, \ldots, g_n, h$ have radii of convergence strictly larger than $R$. In section 4, we will describe an algorithm for doing this, based on Gram–Schmidt orthogonalization. Modulo the change of variables $z \to R z$, it actually suffices to consider the case when $R = 1$. In section 5, we will present some relations which were recognized by the algorithm. In section 6, we will also examine the behaviour of the algorithm in the case when $f_1, \ldots, f_n$ are analytically independent. Section 7 contains a discussion of the algorithm and perspectives.

2. Localization of singularities

Let $f$ be an analytic function which is given by its power series $f_0 + f_1 z + f_2 z^2 + \cdots$ at the origin. A first natural question is how to compute the radius of convergence $\rho = \rho_f$ of $f$ at the origin, assuming that we have an algorithm for computing the coefficients $f_n$. In what follows, we will only be interested in heuristic algorithms. In general, good lower bounds for $\rho$ can often be obtained efficiently, but the computation of sharp upper bounds can be undecidable [DL89]. For some results on the exact computation of $\rho$, we refer to [vdH07].

Theoretically speaking, the radius of convergence is given by

\[ \rho^{-1} = \limsup_{i \to \infty} |f_i|^{1/i}. \]

After the computation of $n \ge 2$ coefficients $f_0, \ldots, f_{n-1}$ of $f$, this yields the approximation

\[ \rho^{-1} \approx \max_{n/2 \le i < n} |f_i|^{1/i}. \tag{4} \]

For most convergent power series $f$, this formula is ultimately exact, in the sense that

\[ \rho^{-1} = \lim_{n \to \infty} \max_{n/2 \le i < n} |f_i|^{1/i}. \tag{5} \]

This is for instance the case when $\log |f_n|$ is ultimately convex or ultimately concave. The set of $f$ for which (5) holds is also stable under the transformation $f(z) \mapsto f(z^k)$ for any $k \in \mathbb{N}^{>} = \{ n \in \mathbb{N} : n > 0 \}$. Of course, we may replace $n/2$ by $\lambda n$ in (4), for any $\lambda \in (0, 1)$.

The formula (4) has the disadvantage that it has not been scaled appropriately: when replacing $f$ by $c f$, where $c \in \mathbb{R}^{>}$ is such that $c \neq 1$, we obtain different approximations for $\rho_f$ and $\rho_{c f}$, even though these radii coincide. Therefore, it is better to replace $|f_i|^{1/i}$ by $|f_i / f_j|^{1/(i - j)}$ for some appropriate coefficient $f_j \neq 0$ with $j < i$. One way to choose appropriate indices is to consider the numerical Newton polygon of $f_m z^m + \cdots + f_{n-1} z^{n-1}$, where $m = \lceil \lambda n \rceil$ and $\lambda = 1/2$. Let $P_i = (i, \log |f_i|)$ for $m \le i < n$, where we understand that $\log 0 = -\infty$. The Newton diagram of $f$ is the convex hull of the half-lines $P_i + \{0\} \times \mathbb{R}^{\le}$ for $m \le i < n$. There exists a minimal subset $\{P_{i_1}, \ldots, P_{i_k}\} \subseteq \{P_m, \ldots, P_{n-1}\}$ with $i_1 < \cdots < i_k = n - 1$, such that the Newton diagram is also the convex hull of the half-lines $P_{i_j} + \{0\} \times \mathbb{R}^{\le}$ for $1 \le j \le k$. Graphically speaking, the $P_{i_j}$ are the vertices of the Newton diagram. For a fixed $\mu \in (\lambda, 1)$, say $\mu = 3/4$, we may now determine the unique edge $P_{i_j} P_{i_{j+1}}$ of the Newton diagram such that $i_j \le \mu n < i_{j+1}$, and replace the formula (4) by

\[ \rho^{-1} \approx \left( \frac{|f_{i_{j+1}}|}{|f_{i_j}|} \right)^{\frac{1}{i_{j+1} - i_j}}. \tag{6} \]

For most convergent power series $f$, the formula (6) is again ultimately exact. The indices $i_1 < \cdots < i_k$ can be computed from the coefficients $f_m, \ldots, f_{n-1}$ using a linear traversal in time $O(n)$. Modulo a tiny extra cost, this enables us to compute a more accurate approximation of $\rho$ than (4). The formula (6) has been implemented in the Mathemagix system [vdH02]. For $n = 64$, the computed radius is usually correct up to one decimal digit.
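In concrete terms, the Newton-diagram estimator (6) may be sketched as follows. This is a minimal Python version of our own (not the Mathemagix implementation), which assumes that enough coefficients are supplied as a numeric array:

```python
import numpy as np

def radius_estimate(f, mu=0.75):
    """Heuristic radius of convergence of sum f[i] z^i, via the edge of the
    numerical Newton diagram that straddles mu*n, as in formula (6)."""
    n = len(f)
    m = n // 2                                   # lambda = 1/2
    xs = np.arange(m, n)
    ys = np.log(np.abs(np.asarray(f))[m:n] + 1e-300)
    hull = []                                    # upper convex hull of the P_i
    for x, y in zip(xs, ys):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (x - x2) <= (y - y2) * (x2 - x1):
                hull.pop()                       # last vertex is not extremal
            else:
                break
        hull.append((x, y))
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= mu * n < x2:                    # the edge P_{i_j} P_{i_{j+1}}
            return float(np.exp(-(y2 - y1) / (x2 - x1)))
    (x1, y1), (x2, y2) = hull[0], hull[-1]       # fallback: global slope
    return float(np.exp(-(y2 - y1) / (x2 - x1)))
```

For instance, for the coefficients $f_i = 2^{-i}$ of $f = 1/(1 - z/2)$, the returned estimate tends to the true radius $2$ as $n$ grows.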
The formula (6) usually yields a reasonable estimate for $\rho$, even in very degenerate cases when there are several singularities at distance $\rho$, or close to $\rho$. If $f$ admits an isolated singularity $\sigma$ at distance $\rho = |\sigma|$, with no other singularities at distance close to $\rho$, then the quotient $f_n / f_{n+1}$ usually tends to $\sigma$ for large $n$. When the singularity at $\sigma$ has a known type, then the approximation $\sigma \approx f_n / f_{n+1}$ can be further improved. For instance, in the case of algebraic singularities, we have

\[ f_n \approx \sigma^{-n} n^{\alpha} (a_0 + a_1 n^{-1/p} + a_2 n^{-2/p} + \cdots), \]

with $\alpha \in \mathbb{Q}$ and ramification index $p \in \mathbb{N}^{>}$, whence

\[ \log f_n \approx (\log \sigma^{-1})\, n + \alpha \log n + \log a_0 + \frac{a_1}{a_0}\, n^{-1/p} + \cdots. \tag{7} \]

Using the E-algorithm [Wen01, BZ91], we may now compute simultaneous approximations for the first coefficients $\log \sigma^{-1}$, $\alpha$, $\log a_0$, $a_1 / a_0$, ... of the expansion (7). It turns out that this strategy greatly improves the accuracy of the approximation of $\sigma$ (see also [vdH06]).

Similarly, we say that $f$ is Fuchsian at $\sigma$ if $f$ satisfies a linear differential equation

\[ L_t\, \delta^t f + \cdots + L_0\, f = 0, \]

where $\delta = z\, \partial / \partial z$ and $L_0, \ldots, L_t$ are analytic functions at $\sigma$ with $L_t(\sigma) \neq 0$. In that case, the Taylor coefficients $f_n$ satisfy the asymptotic expansion

\[ f_n \approx \sigma^{-n} n^{\alpha} (a_0(\log n) + a_1(\log n)\, n^{-1/p} + a_2(\log n)\, n^{-2/p} + \cdots), \]

where $\alpha \in \mathbb{C}$, $p \in \mathbb{N}^{>}$, and the $a_i$ are polynomials in $\log n$ of degrees $< t$. Again, the E-algorithm or more general algorithms for asymptotic extrapolation [vdH06] can be used to compute $\sigma$ with a high accuracy. Notice that these algorithms also provide estimates for the accuracies of the computed approximations.

In the case when the closest singularity $\sigma$ is a simple pole, we may directly try to find the polynomial $P = z - \sigma$. This polynomial has the property that the radius of convergence of $P f$ is strictly larger than $|\sigma|$. More generally, if $f$ is meromorphic on the compact disc $\bar{\mathcal{D}}_r$ of radius $r$, then we may search for a polynomial $P$ such that the radius of convergence of $P f$ is strictly larger than $r$. This can be done using simple linear algebra, as follows:

Algorithm denom($f$, $r$, $D$)
Input: the first $n \ge 2 D$ coefficients of $f$, a radius $r$ and a degree bound $D$
Output: an approximation of a monic polynomial $P$ with $\rho_{P f} > r$, $P$ chosen of minimal degree $d \le D$, or "failed"

  1. [Initialize] Set $d := 0$.
  2. [Determine $P$] Solve the linear system
\[ \begin{pmatrix} f_{n-d} & f_{n-d-1} & \cdots & f_{n-2d+1} \\ \vdots & \vdots & & \vdots \\ f_{n-1} & f_{n-2} & \cdots & f_{n-d} \end{pmatrix} \begin{pmatrix} P_d \\ P_{d-1} \\ \vdots \\ P_1 \end{pmatrix} + \begin{pmatrix} f_{n-2d} \\ \vdots \\ f_{n-d-1} \end{pmatrix} = 0 \tag{8} \]
     and set $P := z^d + P_1 z^{d-1} + \cdots + P_d$.
  3. [Terminate or loop] Heuristically determine $\rho_{P f}$, based on the first $n$ coefficients of $P f$. If $\rho_{P f} > r$, then return $P$. If $d = D$, then return "failed". Set $d := d + 1$ and go to step 2.

Theorem 1. The algorithm denom is correct.

Proof. Modulo a scaling $z \to r z$, we may assume without loss of generality that $r = 1$. Furthermore, the proof is clear if $\rho_f > 1$, so we will assume that $\rho_f \le 1$. Assume that there exists a polynomial $P$ with $\rho_{P f} > 1$, and choose

\[ Q = z^{\delta} + Q_1 z^{\delta - 1} + \cdots + Q_{\delta} \]

monic and of minimal degree $\delta$. Setting $h := Q f$, we have

\[ \begin{pmatrix} f_{n-\delta} & \cdots & f_{n-2\delta+1} \\ \vdots & & \vdots \\ f_{n-1} & \cdots & f_{n-\delta} \end{pmatrix} \begin{pmatrix} Q_{\delta} \\ \vdots \\ Q_1 \end{pmatrix} + \begin{pmatrix} f_{n-2\delta} \\ \vdots \\ f_{n-\delta-1} \end{pmatrix} = \begin{pmatrix} h_{n-\delta} \\ \vdots \\ h_{n-1} \end{pmatrix}. \tag{9} \]

Since we assumed that $\rho_{Q f} > 1$, the coefficients $h_k = O(\theta^k)$ have exponential decay, for some $\theta < 1$. Now let $A_n$ denote the matrix of the system (9), consider an arbitrary polynomial $V = V_1 z^{\delta - 1} + \cdots + V_{\delta}$ of degree $< \delta$, and let $g = V f$, so that

\[ A_n \begin{pmatrix} V_{\delta} \\ \vdots \\ V_1 \end{pmatrix} = \begin{pmatrix} g_{n-\delta} \\ \vdots \\ g_{n-1} \end{pmatrix}. \]

By the minimality hypothesis on $Q$, we have $\rho_{V f} \le 1$, whence the coefficients $g_k$ do not decay exponentially. Since this property holds for all $V$, it follows that the matrix norm $\| A_n^{-1} \|$ remains bounded for large $n$. In particular, the solution to (8) with $d = \delta$ is only an exponentially small perturbation of the solution to (9), for large values of $n$. ∎
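The following Python sketch implements denom along these lines (our own minimal version; it reuses radius_estimate from section 2 as the heuristic radius test, with an arbitrary 10% safety margin):

```python
import numpy as np

def denom(f, r, D):
    """Monic P = z^d + P_1 z^(d-1) + ... + P_d of minimal degree d <= D
    such that rho(P f) > r, or None.  f holds the first n >= 2D coefficients;
    P is returned with coefficients in increasing order of powers."""
    f = np.asarray(f, dtype=complex)
    n = len(f)
    for d in range(D + 1):
        if d == 0:
            P = np.array([1.0 + 0j])
        else:
            # rows k = n-d, ..., n-1 of the system (8):
            #   P_d f_k + P_{d-1} f_{k-1} + ... + P_1 f_{k-d+1} = -f_{k-d}
            A = np.array([[f[k - m] for m in range(d)] for k in range(n - d, n)])
            b = -f[n - 2 * d : n - d]
            x = np.linalg.solve(A, b)            # x = (P_d, ..., P_1)
            P = np.concatenate((x, [1.0 + 0j]))
        Pf = np.convolve(P, f)[:n]               # first n coefficients of P f
        if radius_estimate(Pf) > 1.1 * r:        # heuristic test from section 2
            return P
    return None
```

For instance, for $f = 1/(1 - 1.25 z) + 1/(1 - 0.5 z)$ given by its first 36 coefficients, denom(f, 0.9, 2) detects the pole at $0.8$ and returns, approximately, the coefficients of $z - 0.8$; the series $P f$ indeed has radius of convergence $2 > 0.9$.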
Remark 2. The algorithm denom uses a very simple $d$-step search for the optimal degree $d$. Using a binary search (doubling at each step at a first stage, and using a dichotomic search at a second stage), the number of steps can be reduced to $O(\log d)$.

3. Detecting dependencies via analytic continuation

Let $f$ be an analytic function which is given by its power series $f_0 + f_1 z + f_2 z^2 + \cdots$ at the origin. Assume that $f$ can be continued analytically on a Riemann surface above the closed disk $\bar{\mathcal{D}}_r$ of radius $r$, minus a finite set of points $\sigma_1, \ldots, \sigma_s$. Let $\mathcal{A}$ be the set of analytic functions on $\bar{\mathcal{D}}_r$. We say that $f$ is algebraic on $\bar{\mathcal{D}}_r$ if $f$ only admits algebraic singularities inside $\bar{\mathcal{D}}_r$. This is the case if and only if there exists a polynomial dependency

\[ P_d f^d + \cdots + P_0 = 0, \tag{10} \]

where $P = P_d F^d + \cdots + P_0 \in \mathcal{A}[F]$. In that case, we may normalize the relation such that $d$ is minimal and such that $P_d$ is a monic polynomial of minimal degree. In particular, all roots of $P_d$ lie inside the disc $\bar{\mathcal{D}}_r$. Similarly, we say that $f$ is Fuchsian on $\bar{\mathcal{D}}_r$ if $f$ only admits Fuchsian singularities inside $\bar{\mathcal{D}}_r$. This implies the existence of a dependency

\[ L_t\, \delta^t f + \cdots + L_0\, f = 0, \tag{11} \]

where $L = L_t \delta^t + \cdots + L_0 \in \mathcal{A}[\delta]$. Again, we may normalize (11) such that $t$ is minimal and such that $L_t$ is a monic polynomial of minimal degree. Assuming that we have a high accuracy algorithm for the analytic continuation of $f$, we will give heuristic algorithms for the computation of dependencies of the form (10) or (11), if they exist.

Remark 3. Algorithms for analytic continuation typically exist when $f$ is the solution of an initial value problem. For a precise definition of a computable analytic function, we refer to [vdH07]. For the heuristic purposes of this section, the existence of a numerical algorithm for the continuation of $f$ will be sufficient.

Assume that $f$ is algebraic on $\bar{\mathcal{D}}_r$. For each singularity $\sigma_i$, choose a path $\gamma_i$ from the origin to a point near $\sigma_i$ which avoids the other singularities, and let $C_i$ be the operator which performs an analytic continuation along $\gamma_i$, one turn around $\sigma_i$, followed by an analytic continuation along $\gamma_i^{-1}$. In the following algorithm for the detection of algebraic dependencies, we assume a heuristic equality test for analytic functions at a point, for instance by checking the equality of the first several Taylor coefficients, up to the working precision.

Algorithm alg_dep($f$, $(\sigma_1, \ldots, \sigma_s)$, $N$, $D$, $B$)
Input: an analytic function $f$ above $\bar{\mathcal{D}}_r \setminus \{\sigma_1, \ldots, \sigma_s\}$ and bounds $N$, $D$ and $B$
Output: a normalized algebraic dependency (10), with $d \le D$ and $\deg P_d \le B$, or "failed"

  1. [Initialize] Set $\Phi := \{f\}$.
  2. [Saturate] If $C_i \Phi \subseteq \Phi$ for all $i$, then go to step 3. If $\operatorname{card}(\Phi) > N$, then return "failed". Set $\Phi := C_1 \Phi \cup \cdots \cup C_s \Phi$ and repeat step 2.
  3. [Terminate] Denote $\Phi = \{\varphi_1, \ldots, \varphi_k\}$. Compute $Q := (F - \varphi_1) \cdots (F - \varphi_k) = Q_k F^k + \cdots + Q_0$, with $Q_k = 1$. For each $i \in \{0, \ldots, k\}$, compute $D_i := \mathrm{denom}(Q_i, r, B)$. If $D_i$ = "failed" for some $i$, then return "failed". Return $\operatorname{lcm}(D_0, \ldots, D_k) \cdot Q$.

Theorem 4. The algorithm alg_dep is correct.

Proof. Assume that $f$ satisfies a normalized relation (10), with $d \le D$ and $\deg P_d \le B$. Since $\Phi$ only contains distinct roots of $P$, we have $Q := \prod_{\varphi \in \Phi} (F - \varphi) \mid P$ in $\mathbb{C}((z))[F]$ throughout the algorithm. In particular, $\operatorname{card}(\Phi) \le d$, and we ultimately obtain the stabilization $C_1 \Phi \cup \cdots \cup C_s \Phi \subseteq \Phi$. At this point, analytic continuation around any of the points $\sigma_i$ leaves the polynomial $Q$ invariant, so the coefficients of $Q$ are analytic and single-valued on $\bar{\mathcal{D}}_r \setminus \{\sigma_1, \ldots, \sigma_s\}$. On the other hand, given a singularity $\sigma_i$, each solution $\varphi \in \Phi$ is also given by a convergent Puiseux series near $\sigma_i$, whence so are the coefficients of $Q$. Since the only Puiseux series without monodromy around $\sigma_i$ are Laurent series, it follows that the coefficients of $Q$ are meromorphic on $\bar{\mathcal{D}}_r$. By the minimality assumption on $d$, it follows that $k = d$ and $P = P_d\, Q$. Since each coefficient $Q_i = P_i / P_d$ is meromorphic on $\bar{\mathcal{D}}_r$, we may use the algorithm denom from the previous section in order to compute a polynomial $D_i$ of minimal degree $\deg D_i \le \deg P_d \le B$ such that $D_i Q_i \in \mathcal{A}$. The monic least common multiple $\operatorname{lcm}(D_0, \ldots, D_k)$ is nothing but the monic polynomial $P_d$ of minimal degree such that $P_d\, Q \in \mathcal{A}[F]$. ∎
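The termination step may be illustrated by the following toy Python sketch (entirely ours). In the real algorithm, the branches $\varphi_i$ are produced by numerical analytic continuation around the $\sigma_i$; here, we simply feed the two branches of $f = \sqrt{1 - z}$ directly and expand $Q = (F - \varphi_1)(F - \varphi_2)$ coefficientwise:

```python
import numpy as np

N = 32                                  # truncation order

def sqrt_series(n):
    """Taylor coefficients of sqrt(1 - z) from the binomial series."""
    c = np.zeros(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = -c[k - 1] * (0.5 - (k - 1)) / k
    return c

def mul(a, b):
    """Truncated product of two power series."""
    return np.convolve(a, b)[:N]

phi1 = sqrt_series(N)                   # the two branches of F^2 = 1 - z;
phi2 = -phi1                            # continuation around sigma = 1 swaps them
Q1 = -(phi1 + phi2)                     # coefficient of F in Q: vanishes
Q0 = mul(phi1, phi2)                    # constant coefficient of Q: equals z - 1
print(np.max(np.abs(Q1)), Q0[:3])       # ~0.0 and [-1.  1.  0.]
```

Here the coefficients $Q_0 = z - 1$ and $Q_1 = 0$ are already analytic, so the denominators $D_i$ reduce to $1$ and one recovers the dependency $F^2 - (1 - z)$; in general, step 3 calls denom on each $Q_i$.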
Remark 5. As a safeguard, we may heuristically compute the radii of convergence of the coefficients of the returned dependency and check whether they are indeed superior to $r$.

Now assume that $f$ is Fuchsian on $\bar{\mathcal{D}}_r$.

Algorithm fuch_dep($f$, $(\sigma_1, \ldots, \sigma_s)$, $N$, $D$, $B$)
Input: an analytic function $f$ above $\bar{\mathcal{D}}_r \setminus \{\sigma_1, \ldots, \sigma_s\}$ and bounds $N$, $D$ and $B$
Output: a normalized Fuchsian dependency (11), with $t \le D$ and $\deg L_t \le B$, or "failed"

  1. [Initialize] Set $\Phi := \{f\}$.
  2. [Saturate] If $C_i \Phi \subseteq \operatorname{Vect}(\Phi)$ for all $i$, then go to step 3. If $\operatorname{card}(\Phi) > N$, then return "failed". Set $\Phi := \Phi \cup \{C_i \varphi\}$ for some $i$ and $\varphi \in \Phi$ with $C_i \varphi \notin \operatorname{Vect}(\Phi)$, and repeat step 2.
  3. [Terminate] Denote $\Phi = \{\varphi_1, \ldots, \varphi_k\}$. Compute $K := \operatorname{lcm}(\delta - \lambda_{\varphi_1}, \ldots, \delta - \lambda_{\varphi_k}) = K_k \delta^k + \cdots + K_0 \in \mathbb{C}((z))[\delta]$, where $\lambda_{\varphi}$ denotes $\delta \varphi / \varphi$. For each $i \in \{0, \ldots, k\}$, compute $D_i := \mathrm{denom}(K_i, r, B)$. If $D_i$ = "failed" for some $i$, then return "failed". Return $\operatorname{lcm}(D_0, \ldots, D_k) \cdot K$.

Theorem 6. The algorithm fuch_dep is correct.

Proof. Assume that $f$ satisfies a normalized Fuchsian relation (11), with $t \le D$ and $\deg L_t \le B$. Throughout the algorithm, the set $\Phi$ only contains linearly independent solutions of $L(\varphi) = 0$. Therefore, the smallest operator $K := \operatorname{lcm}_{\varphi \in \Phi} (\delta - \lambda_{\varphi})$ which vanishes on $\Phi$ divides $L$ in $\mathbb{C}((z))[\delta]$. In particular, $\operatorname{card}(\Phi) \le t$, and we ultimately obtain the stabilization $C_1 \Phi \cup \cdots \cup C_s \Phi \subseteq \operatorname{Vect}(\Phi)$. Consider one of the singularities $\sigma_i$. Since $f$ is Fuchsian at $\sigma_i$, the equation $L(\varphi) = 0$ admits a fundamental system of solutions of the form

\[ \varphi(\sigma_i + u) = (\psi_0(u)\, (\log u)^m + \cdots + \psi_m(u))\, u^{\alpha}, \]

where $\alpha \in \mathbb{C}$ and $\psi_0, \ldots, \psi_m$ are convergent power series. The coefficients of $K$ lie in the field $\mathbb{F}$ generated by all generalized power series of this form. Again, the only elements of $\mathbb{F}$ with a trivial monodromy around $\sigma_i$ are convergent Laurent series at $\sigma_i$. Since analytic continuation around $\sigma_i$ leaves the operator $K$ unchanged, it follows that the coefficients of $K$ are meromorphic on $\bar{\mathcal{D}}_r$. We conclude in a similar way as in the proof of Theorem 4. ∎

Remark 7. The hypothesis that $f$ admits at worst a Fuchsian singularity at each $\sigma_i$ is essential for the algorithm to work. For instance, in the case of the function

\[ f = e^{(\sigma - z)^{-1/2}} + e^{-(\sigma - z)^{-1/2}}, \]

the monodromy around $\sigma$ is trivial, whence applying the algorithm for $r > |\sigma|$ would simply result in having $\Phi = \{f\}$ at step 3. Although $f$ has no monodromy around $\sigma$, this function is not a Laurent series. In fact, the desired vanishing operator has second order in this case, but it cannot be read off directly from $\Phi$.

So far, our algorithms assume that we know the locations of the singularities $\sigma_1, \ldots, \sigma_s$ inside the disc $\bar{\mathcal{D}}_r$. Using the techniques from the previous section, we still need an algorithm to determine these locations. Assume that we have localized some of the singularities $\sigma_1, \ldots, \sigma_s$ and that $\Phi$ has been stabilized under the corresponding operators $C_i$. Given one of the singularities $\sigma_i$, one subtask is to determine a small radius $\rho_i$, such that the elements of $\Phi$ do not admit other singularities above the disc with center $\sigma_i$ and radius $\rho_i$. For a given candidate $\rho_i$, this condition can be checked as follows. We choose a sufficient number $A$ of equally spaced points

\[ \zeta_k = \sigma_i + e^{\frac{2 \pi \mathrm{i} k}{A}} \rho_i \qquad (k = 0, \ldots, A - 1) \]

on the Riemann surface of $f$ above the circle with center $\sigma_i$ and radius $\rho_i$. For each $\varphi \in \Phi$ and each point $\zeta_k$, we may use the techniques from the previous section in order to determine the singularity $\sigma_{\varphi,k}$ of $\varphi$ closest to $\zeta_k$. Now our check succeeds if, for all $\varphi$ and $k$, we have either $\sigma_{\varphi,k} \approx \sigma_i$ or $|\sigma_{\varphi,k} - \sigma_i| > \rho_i$. If it fails, then we retry with a smaller $\rho_i$.
As soon as $\rho_i$ becomes smaller than a fixed threshold, we abort. A second subtask is to guarantee the absence of singularities on the remaining part of the Riemann surface of $f$ above $\bar{\mathcal{D}}_r$. We construct a sequence of open balls $\mathcal{B}_1, \mathcal{B}_2, \ldots$ as follows. For $i \le s$, we let $\mathcal{B}_i$ be the ball with center $\sigma_i$ and radius $\rho_i$. The remaining $\mathcal{B}_i$ are constructed by induction over $i$. We arbitrarily choose the center $c_i$ of $\mathcal{B}_i$ in $\bar{\mathcal{D}}_r \setminus (\mathcal{B}_1 \cup \cdots \cup \mathcal{B}_{i-1})$ and heuristically determine the minimum $\varrho_i$ of the radii of convergence of the elements of $\Phi$ above $c_i$. If $\varrho_i$ is significantly smaller than

\[ \delta_i := \min(|c_i - \sigma_1|, \ldots, |c_i - \sigma_s|, |r - |c_i||), \]

then this indicates a missing singularity: we heuristically determine the singularity $\sigma$ closest to $c_i$ and add it to the set $\{\sigma_1, \ldots, \sigma_s\}$. Since $c_i$ was chosen arbitrarily, there usually is at most one closest singularity. As an additional security check, we may also determine the closest singularity to $(c_i + \sigma)/2$ and check that it coincides with $\sigma$. If $\varrho_i \ge \delta_i / 2$, then we take $\varrho_i / 2$ to be the radius of $\mathcal{B}_i$ and continue our construction. Since $\bar{\mathcal{D}}_r$ is compact, the construction will eventually terminate, unless we find a new singularity.

The above algorithms either enable us to find new singularities above $\bar{\mathcal{D}}_r$ or obtain a certificate that there are no other singularities. We may insert them in the saturation steps of alg_dep and fuch_dep, just before jumping to step 3: we only jump if $\Phi$ is saturated under the $C_i$ and no new singularities are found; we keep repeating step 2 in the contrary case. Of course, we may introduce an additional threshold parameter $S$ in order to abort the algorithm whenever $s > S$.

4. Detecting dependencies via orthogonalization

Given an order $N \in \mathbb{N}$, we will denote by $\mathbb{C}[[z]]_N$ the set of power series $f \in \mathbb{C}[[z]]$ with $f_k = 0$ for all $k \ge N$. When truncating the usual multiplication on $\mathbb{C}[[z]]$ at order $N$, we give $\mathbb{C}[[z]]_N$ a ring structure, which is isomorphic to $\mathbb{C}[z]/(z^N)$. We will denote by $\mathbb{C}[[z]]_{\ell^2}$ the Hilbert space of all power series $f \in \mathbb{C}[[z]]$ with finite $\ell^2$ norm

\[ \|f\| = \sqrt{|f_0|^2 + |f_1|^2 + \cdots}. \]

More generally, the norm of a vector $f = (f_1, \ldots, f_k) \in \mathbb{C}[[z]]_{\ell^2}^k$ is given by

\[ \|f\| = \sqrt{\|f_1\|^2 + \cdots + \|f_k\|^2}. \]

The spaces $\mathbb{C}[[z]]_0 \subseteq \mathbb{C}[[z]]_1 \subseteq \cdots$ can be considered as an increasing sequence of Hilbert spaces, for the restrictions of the norm on $\mathbb{C}[[z]]_{\ell^2}$ to the $\mathbb{C}[[z]]_N$.

Let $R = 1$ and assume that $f_1, \ldots, f_n \in \mathbb{C}[[z]]$ are power series with radii of convergence at least $r > 0$. We want to solve the equation

\[ g_1 f_1 + \cdots + g_n f_n = h \qquad (g_1, \ldots, g_n, h \in \mathbb{C}[[z]]_{\ell^2}). \tag{12} \]

We will denote by $\mathbb{L}_{\infty}$ the affine space of all such relations $\Lambda = (\lambda_1, \ldots, \lambda_{n+1}) = (g_1, \ldots, g_n, h) \in \mathbb{C}[[z]]_{\ell^2}^{n+1}$. Since the equation (12) involves an infinite number of coefficients, we need to consider its truncated version at a finite order $N \in \mathbb{N}$. Replacing $f_1, \ldots, f_n$ by their truncations in $\mathbb{C}[[z]]_N$, we search for non-trivial solutions of the equation

\[ g_1 f_1 + \cdots + g_n f_n = h \qquad (g_1, \ldots, g_n \in \mathbb{C}[[z]]_d,\ h \in \mathbb{C}[[z]]_N), \tag{13} \]

such that the norms of $(g_1, \ldots, g_n)$ and $h$ are small. Here $d \le N$ is a second truncation order; for the theoretical analysis below, one may take $d = N$. We will denote by $\mathbb{L}_N$ the affine space of all $\Lambda = (\lambda_1, \ldots, \lambda_{n+1}) = (g_1, \ldots, g_n, h)$ which satisfy (13).

Let us reformulate our problem in terms of linear algebra. The series $f_1, \ldots, f_n$ give rise to an $(n d) \times N$ matrix $M$, whose rows are the coefficient vectors of the truncated power series $z^j f_i \in \mathbb{C}[[z]]_N$, for $1 \le i \le n$ and $0 \le j < d$. The unknown series $g_1, \ldots, g_n$ give rise to a row vector

\[ G = (g_{1,0}, \ldots, g_{1,d-1}, \ldots, g_{n,0}, \ldots, g_{n,d-1}). \]

Setting $h := g_1 f_1 + \cdots + g_n f_n$, we then have

\[ G M = (h_0, \ldots, h_{N-1}), \]

whence $\Lambda := (G, G M)$ encodes the relation $(\lambda_1, \ldots, \lambda_{n+1}) = (g_1, \ldots, g_n, h)$. This reduces the problem to finding those vectors $G$ for which $\|G M\|$ is small, provided that at least one of the coefficients $g_{1,0}, \ldots, g_{n,0}$ is reasonably large.
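In code, the matrix $M$ may be built as follows (a Python sketch with our own conventions; the row order, with the shifted rows first and the series $f_1, \ldots, f_n$ themselves last, is chosen with the orthogonalization of the next paragraphs in mind):

```python
import numpy as np

def build_M(F, d):
    """The (n d) x N matrix M: one row per truncated series z^j f_i,
    with the shifted rows (j = d-1, ..., 1) first and the rows f_i (j = 0) last.
    F is a list of n coefficient arrays of a common length N."""
    n, N = len(F), len(F[0])
    rows = []
    for j in range(d - 1, 0, -1):
        for f in F:
            rows.append(np.concatenate((np.zeros(j), f[:N - j])))
    for f in F:
        rows.append(np.asarray(f, dtype=float))
    return np.array(rows)
```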
We start with the computation of a thin LQ decomposition of $M$. This can for instance be done using the Gram–Schmidt process: starting with the first row, we subtract from each row its orthogonal projection on the vector space spanned by the previous rows. This results in a decomposition

\[ M = L\, Q, \]

where $L$ is a lower triangular $(n d) \times (n d)$ matrix with ones on the diagonal, and $Q = L^{-1} M$ is an $(n d) \times N$ matrix whose rows are mutually orthogonal. Applying the same transformation $L^{-1}$ to the extended matrix $\tilde{M} := (\mathrm{Id} \mid M)$ yields $(L^{-1} \mid Q)$; in particular, the decomposition can be done in-place. We order the rows of $M$ in such a way that the $n$ rows holding the coefficients of $f_1, \ldots, f_n$ themselves come last, and we consider the matrix $\tilde{\Lambda}$ formed by the last $n$ rows of $(L^{-1} \mid Q)$. Each row $\tilde{\Lambda}_i$ of $\tilde{\Lambda}$ then encodes a relation (13), with $G$ given by its first $n d$ entries and $h = G M$ by its last $N$ entries. Moreover, this relation is normal, or $i$-normal, in the sense that $\lambda_{i,0} = 1$ and $\lambda_{j,0} = 0$ for all $j > i$. Since $\tilde{\Lambda}_i$ is the shortest vector in $\tilde{M}_{\ell} + \operatorname{Vect}(\tilde{M}_1, \ldots, \tilde{M}_{\ell-1})$, where $\ell$ is the index of the row of $M$ holding the coefficients of $f_i$, the relation is also minimal in norm, among all $i$-normal relations. Choosing the row $\tilde{\Lambda}_i$ for which $\|\tilde{\Lambda}_i\|$ is minimal, our algorithm simply returns the corresponding relation. Our algorithm therefore has the following fundamental property:

Proposition 8. The algorithm returns a normal relation $\Lambda \in \mathbb{L}_N$ of minimal norm.

Let us now return to the case when $f_1, \ldots, f_n \in \mathbb{C}[[z]]$ are no longer truncated, but all have radii of convergence at least $r$. A relation (12) is again said to be normal, or $i$-normal, if $\lambda_{i,0} = 1$ and $\lambda_{j,0} = 0$ for all $j > i$. Under the limit $N \to \infty$, we claim that our algorithm finds a minimal normal relation, whenever a relation of the form (12) exists:

Theorem 9.
(a) Assume that $\mathbb{L}_{\infty} \neq \varnothing$. Then $\mathbb{L}_{\infty}$ contains a minimal normal relation.
(b) Assume that $\Lambda \in \mathbb{L}_{\infty}$ is a minimal $i$-normal relation. For each $N \in \mathbb{N}$, consider the truncations of $f_1, \ldots, f_n$ at order $N$ and the corresponding minimal $i$-normal relation $\Lambda_N \in \mathbb{L}_N$. Then the relations $\Lambda_N$ converge to $\Lambda$ in $\mathbb{C}[[z]]_{\ell^2}^{n+1}$.
(c) If $\mathbb{L}_{\infty} = \varnothing$, then the norms $\|\Lambda_N\|$, with $\Lambda_N$ as in (b), are unbounded for $N \to \infty$.

Proof. A non-trivial relation $\Lambda = (g, h)$ is easily normalized: we first divide $\Lambda$ by $z^v$, where $v = \min \{\operatorname{val} g_1, \ldots, \operatorname{val} g_n\}$; we next divide by $\lambda_{i,0}$, where $i$ is largest with $\lambda_{i,0} \neq 0$. Now the set of all $i$-normal relations in $\mathbb{L}_{\infty}$ is a closed affine subspace of $\mathbb{C}[[z]]_{\ell^2}^{n+1}$. The orthogonal projection of $0$ on this subspace yields an $i$-normal relation of minimal norm. This proves (a).

As to (b), assume that there exists an $i$-normal relation $\Lambda \in \mathbb{L}_{\infty}$. Given an order $N \in \mathbb{N}$, consider the minimal $i$-normal relation $\Lambda_N$ at this order. Truncation of this relation at a smaller order $N' \le N$ yields an $i$-normal relation $\Lambda'_{N'} \in \mathbb{L}_{N'}$ with $\Lambda'_{N'} \perp (\Lambda_N - \Lambda'_{N'})$, whence

\[ \|\Lambda_N\|^2 = \|\Lambda'_{N'}\|^2 + \|\Lambda_N - \Lambda'_{N'}\|^2. \tag{14} \]

Moreover, since $\Lambda_{N'}$ is the orthogonal projection of $0$ on the affine space of $i$-normal relations at order $N'$, we have $\Lambda_{N'} \perp (\Lambda'_{N'} - \Lambda_{N'})$ and

\[ \|\Lambda'_{N'}\|^2 = \|\Lambda'_{N'} - \Lambda_{N'}\|^2 + \|\Lambda_{N'}\|^2. \tag{15} \]

In particular, $\|\Lambda_{N'}\| \le \|\Lambda'_{N'}\| \le \|\Lambda_N\|$ and

\[ \|\Lambda_0\| \le \|\Lambda_1\| \le \cdots \le \|\Lambda\|. \]

For large $N' \le N$, it follows that $\|\Lambda_N\| - \|\Lambda_{N'}\| \to 0$, whence

\[ \|\Lambda_N - \Lambda_{N'}\| \le \|\Lambda_N - \Lambda'_{N'}\| + \|\Lambda'_{N'} - \Lambda_{N'}\| \to 0. \]

In other words, $(\Lambda_N)$ is a Cauchy sequence, which converges to a limit $\tilde{\Lambda} \in \mathbb{C}[[z]]_{\ell^2}^{n+1}$. This limit is again an $i$-normal relation, with $\|\tilde{\Lambda}\| \le \|\Lambda\|$, so the minimality hypothesis on $\Lambda$ yields $\|\tilde{\Lambda}\| = \|\Lambda\|$. Passing (14) and (15) to the limit, we get $\|\tilde{\Lambda} - \Lambda\| = 0$, whence $\tilde{\Lambda} = \Lambda$. This proves (b). In general, the existence of a bound $B$ with

\[ \|\Lambda_0\| \le \|\Lambda_1\| \le \cdots \le B \]

still ensures $(\Lambda_N)$ to be a Cauchy sequence, whose limit yields an $i$-normal relation (12). This proves the last assertion (c). ∎
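The whole method then fits in a few lines of Python (a dense, double-precision sketch of ours, reusing build_M above; a serious implementation would work with a large bit precision and exploit the band structure of $M$). We orthogonalize the extended matrix $(\mathrm{Id} \mid M)$, whose rows encode the full relations $\Lambda = (G, G M)$:

```python
import numpy as np

def guess_relation(F, d):
    """Minimal normal relation g_1 f_1 + ... + g_n f_n = h with deg g_i < d.
    Returns (norm, g, h), where g[i] holds the coefficients of g_{i+1}."""
    n, N = len(F), len(F[0])
    A = np.hstack((np.eye(n * d), build_M(F, d)))   # extended matrix (Id | M)
    Q, R = np.linalg.qr(A.T)                        # A = R.T Q.T: a thin LQ
    best = None
    for i in range(n):
        l = n * d - n + i                           # row of the i-normal relation
        lam = Q.T[l] / Q.T[l, l]                    # rescale so that lambda_{i,0} = 1
        nrm = np.linalg.norm(lam)
        if best is None or nrm < best[0]:
            g = np.zeros((n, d))
            g[:, 0] = lam[n * (d - 1) : n * d]      # rows f_1, ..., f_n come last
            for j in range(1, d):                   # then the shifted rows z^j f_i
                g[:, j] = lam[(d - 1 - j) * n : (d - j) * n]
            best = (nrm, g, lam[n * d :])           # h = G M
    return best
```

Row $l$ of $Q^{\mathsf{T}}$ vanishes on the identity block beyond position $l$, which is precisely the $i$-normality constraint; the rescaling enforces $\lambda_{i,0} = 1$.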
5. Examples of recognized relations

A first implementation of our guessing algorithm has been made in the Mathemagix system [vdH02]. It is instructive to test the algorithm on examples for which the existence of linear dependencies is known. We have done this for three examples of increasing difficulty. In order to avoid problems due to numerical instability, we have used a large working precision, of 1024 or 2048 bits.

Single poles. The easiest example on which we can run the algorithm is

\[ f_1 = \frac{1}{1 - \varphi z} \qquad (\varphi > 1). \]

[Computed relations $\Lambda_N$ at four successive orders, for $\varphi = 2$; the trailing coefficients of the successive $\Lambda_N$ are of the orders $2 \cdot 10^{-1}$, $10^{-6}$, $10^{-26}$ and $10^{-107}$.]

The $\Lambda_N$ clearly converge to a limit $\Lambda = (g_1, h)$ with radius of convergence $> 1$. It should be noticed that we do not have $g_1 = 1 - \varphi z$. In other words, we did not find the "best" relation, as in the case of Padé–Hermite approximation. A closer examination of the result shows that

\[ g_1 = \frac{1 - \varphi z}{1 - \psi z}, \qquad \psi = \frac{\varphi^2 + 2 - \sqrt{\varphi^4 + 4}}{2 \varphi} \in (0, 1). \]

In particular, $\psi$ decreases if $\varphi$ increases.

Logarithmic singularity. Our second example concerns a logarithmic singularity:

\[ f_1 = \ldots, \qquad f_2 = \ldots \]

[Computed relations $\Lambda_N$ at six successive orders.]

The convergence only becomes apparent at higher orders. Contrary to the previous example, it seems that the series in the computed relation all have a radius of convergence $1$. The norms of the computed results are given by $\|\Lambda_N\| = 3.5577$, $3.5955$, $3.6739$ and $3.6862$ at four successive orders.

Number of alcohols. A more interesting example from combinatorics is the enumeration of the number of alcohols of the form $\mathrm{C}_n \mathrm{H}_{2n+1} \mathrm{OH}$ [Pol37]. Its generating series satisfies the functional equation

\[ s(z) = 1 + z\, \frac{s(z)^3 + 3\, s(z)\, s(z^2) + 2\, s(z^3)}{6}. \]

Using asymptotic extrapolation [vdH06], this series is found to have a radius of convergence $\rho \approx 0.304218409$. In order to investigate $s$ close to its dominant singularity, we apply our algorithm to

\[ f_1 = s(z + 0.25), \qquad f_2 = s(z + 0.25)^2. \]

The translation $z \to z + 0.25$ is done using power series evaluation until stabilization at the working precision.

[Computed relations $\Lambda_N$ at six successive orders.]

The corresponding norms are given by $\|\Lambda_N\| = 4.3276$, $4.3845$, $4.3863$ and $4.3863$ at four successive orders. Again, the convergence is rather slow, and the computed series all seem to have radius of convergence $\rho - 0.25$.

6. Rate of divergence in case of independence

It is also instructive to run the algorithm on examples where the $f_i$ are known to be analytically independent. In that case, it is important to examine the behaviour of the norms $\|\Lambda_N\|$ for $N \to \infty$, and to find more precise criteria which will enable us to discard the existence of a nice dependency with a reasonable degree of certainty.
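On a toy scale, the contrast between the dependent and the independent case is easy to observe with the guess_relation sketch from section 4 (the precise numbers below are indicative only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 32, 4
f_pole = 2.0 ** np.arange(N)                           # 1/(1 - 2z): a dependency exists
f_rand = 2.0 ** np.arange(N) * rng.standard_normal(N)  # same radius 1/2, but random
print(guess_relation([f_pole], d)[0])   # stays of the order of a few units
print(guess_relation([f_rand], d)[0])   # grows roughly like 2^(N/2)
```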
Logarithmic singularity. Let us return to the case of a simple logarithmic singularity

\[ f = \log(1 - \varphi z), \]

where $\varphi > 1$. Running our algorithm directly on this function yields the following results for $\varphi = 2$:

[Computed relations $\Lambda_N$ at four successive orders.]

The results indeed do not converge, and the corresponding sequence of norms ($\|\Lambda_N\| = 12.324$, $77.306$, $3383.9$, ...) diverges at moderate speed.

Various singularities. In the above example, it is interesting to study the rate of divergence of the sequence of norms for other values of $\varphi$. More generally, we can consider various types of singularities, such as

\[ f^{(1)} = \log(1 - \varphi z), \qquad f^{(2)} = \sqrt{1 - \varphi z}, \qquad f^{(3)} = e^{\varphi z / (1 - \varphi z)}, \qquad f^{(4)} = \epsilon(\varphi z). \]

Here $\epsilon = \epsilon_0 + \epsilon_1 z + \epsilon_2 z^2 + \cdots$ is a series whose coefficients are chosen according to a random uniform distribution on $[0, 1]$. The results are shown in Table 1 below. For $f^{(1)}$, $f^{(2)}$ and $f^{(3)}$, it seems that the norm does not much depend on the precise type of singularity, but only on $\varphi$ and the truncation order $N$. For the last series $f^{(4)}$, the dependencies on $\varphi$ and $N$ approximately seem to follow the law

\[ \log \|\Lambda_N\| \approx \frac{N}{2} \log \varphi. \tag{16} \]

This idealized law needs to be adjusted for functions $f^{(1)}$, $f^{(2)}$, $f^{(3)}$ with more common types of singularities. Although (16) remains asymptotically valid for large values of $\varphi$, the factor $\log \varphi$ needs to be replaced by a smaller number for moderate values; there, this factor rather seems to be of the form $c\, (\log \varphi)^2$, where the constant $c$ depends on the nature of the singularity. Also, the linear dependency of $\log \|\Lambda_N\|$ on $N$ is only reached for large values of $N$.

[Table 1. Computed values of $\|\Lambda_N\|$ for various types of singularities $f^{(i)}$ and orders $N$.]

Various singularities and $d > 1$. We have also applied our algorithm jointly to several series $f^{(i)}$ as above, and studied the dependency of $\|\Lambda_N\|$ on the dimension $d$ of the input vector $(f_1, \ldots, f_d)$. The results are shown in Table 2 below. In the bottom rows, $\epsilon, \epsilon', \epsilon'', \ldots$ stand for distinct uncorrelated random series of the same type as $\epsilon(\varphi z)$. In that case, the relation (16) generalizes to

\[ \log \|\Lambda_N\| \approx \frac{N}{d + 1} \log \varphi. \tag{17} \]

It also seems that the law can be adapted to functions with more common types of singularities, along similar lines as before.

[Table 2. Computed values of $\|\Lambda_N\|$ for input vectors of increasing dimensions $d$.]

7. Discussion and perspectives

In the case when a linear relation (3) exists, we have observed in section 5 that our algorithm usually returns a relation whose series all have radius of convergence close to $1$. Larger radii of convergence can be obtained simply by running the algorithm on $f_1(R z), \ldots, f_n(R z)$ and scaling back the result. In the example of the enumeration of alcohols, we have also mentioned the fact that the detection of dependencies is improved by zooming in on the singularity. Although we used a shift $z \to z + c$ for doing so, this is quite expensive in general: roughly $(\log (r / |c|))^{-1}\, n$ Taylor coefficients of $f$ are needed in order to accurately compute $n$ coefficients of $f(z + c)$.
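A naive shift routine makes this cost transparent (our own $O(n m)$ sketch; the closer $c$ comes to the dominant singularity, the more input coefficients are needed per accurate output coefficient):

```python
import numpy as np

def taylor_shift(f, c, m):
    """First m coefficients of f(z + c), computed naively from those of f.
    Accurate only when len(f) is large enough for the neglected tails."""
    n = len(f)
    g = np.zeros(m)
    for k in range(m):
        b, s = 1.0, 0.0                 # b runs through binom(k + j, k)
        for j in range(n - k):
            s += b * f[k + j] * c ** j
            b *= (k + j + 1) / (j + 1)
        g[k] = s
    return g
```

In the alcohol example, computing $m$ accurate coefficients of $s(z + 0.25)$ in this way consumes roughly ten times as many coefficients of $s$, in line with the estimate above.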
When possible, it is better to postcompose $f$ with a suitable analytic function $\phi(z) = \phi_1 z + \phi_2 z^2 + \cdots$. Here $\phi$ should be chosen in such a way that $f \circ \phi$ admits $\phi^{-1}(\sigma)$ as its only singularity in the unit disk, and such that $|\phi^{-1}(\sigma)|$ is as small as possible. Also, if $f$ satisfies a differential equation, then $f \circ \phi$ can be expanded efficiently by integrating the equation from its initial conditions [BK78].

Our empirical observations in the previous section suggest a few criteria for detecting the non-existence of dependencies (3). First of all, when running the algorithm at different orders $N$, we should observe a more or less geometric increase of the norms $\|\Lambda_N\|$. If we know the modulus $\varphi^{-1}$ of the smallest singularity, then this idea may be refined by comparing the computed values of $\|\Lambda_N\|$ with the expected values, as given by the law (17), or a suitable adaptation of this law when $\varphi$ becomes small. This method is most effective when $\varphi$ is large. When possible, it is therefore recommended to zoom in on the singularity, as discussed above.

For any numerical checks based on the law (17) or a refinement of it, we also recommend to precondition the input series $f_1, \ldots, f_n$. In particular, we recommend to multiply each $f_i$ by a suitable constant, ensuring that

\[ \max_{k < N} |f_{i,k}|\, r^k = 1, \]

whenever we apply our algorithm at order $N$. Here $r$ is computed using one of the algorithms from section 2.

Let us now analyze the cost of the algorithm from section 4. The working precision $p$ should in general be taken larger than $\log_2 \|\Lambda_N\|$, in order to keep the method numerically stable. Denoting by $\mathsf{M}(p)$ the cost of multiplying two $p$-bit numbers, a naive implementation of the Gram–Schmidt orthogonalization procedure yields a total cost of $O(d^3 n^3\, \mathsf{M}(p))$. Denoting by $\mathsf{MM}(k, p)$ the cost of multiplying two $k \times k$ matrices with $p$-bit entries, and using a blockwise Gram–Schmidt procedure, we obtain the better bound $O(\mathsf{MM}(d n, p))$. However, the matrix $M$ from section 4 has a very special form. With more work, it might therefore be possible to save an additional factor $d$, but we have not actively tried to do so yet.

Since it is often possible to zoom in on the singularity, we may evaluate the computational cost in terms of the desired "output quality". As a definition of the output quality, we may take the expected value $q = \frac{N}{d+1} \log \varphi$ of $\log \|\Lambda_N\|$ in the case when $f_1, \ldots, f_n$ are independent. In terms of $q$, the time complexity then becomes

\[ T = O\!\left( \mathsf{MM}\!\left( \frac{q\, d}{\log \varphi},\, q \right) \right) \subseteq O\!\left( \frac{q^{\omega + 1}\, d^{\omega + 1}}{(\log \varphi)^{\omega}} \right), \]

where $2 \le \omega \le 2.376$ is the exponent of matrix multiplication. The complexity bound makes it clear that we should put a lot of effort into keeping $\varphi$ large. For instance, zooming in on the singularity using a shift makes it possible to replace $\varphi$ by $\tilde{\varphi} = \lambda \varphi$, modulo the evaluation of $f$ at a larger order $\tilde{n} \approx (1 + \log \lambda)\, n$ instead of $n$ (and at a bit precision $\tilde{p} \approx (1 + \log \lambda)\, p$ instead of $p$). In many practical cases, this can be done in time $\tilde{O}(n p)$ [vdH02b], which allows for a drastic reduction of the complexity. Of course, a possible analytic relation will only be obtained on a small disk around the singularity of interest.

In short, our algorithm from section 4 mainly works well if the following conditions are satisfied:

  1. The number $d$ of input series should not be too large.
  2. There exists a large constant $\varphi \gg 1$ for which all singularities of the $f_i$ are concentrated inside the disk of radius $\varphi^{-1}$ or outside the disk of radius $\varphi$.

If we are interested in dependencies near an isolated singularity, then the second condition can often be achieved by zooming in on the singularity. As a final note, we mention the fact that linear analytic dependencies can sometimes be obtained using the process of asymptotic extrapolation [vdH06].
For instance, given a function $f$ with an isolated smallest singularity at $z = 1$, of the form

\[ f(z) = a(z) + b(z) \log \frac{1}{1 - z}, \]

where $a$ and $b$ are analytic at $1$, the asymptotic extrapolation of the coefficients $f_0, f_1, \ldots$ yields an asymptotic expansion of the form

\[ f_n \approx \frac{c_1}{n} + \frac{c_2}{n^2} + \cdots. \]

Using singularity analysis [FS96], we may then recover the function $b$ from the coefficients $c_1, c_2, \ldots$. However, this technique only works in special cases, since the first terms of the asymptotic expansion may hide other terms, which need to be taken into account when searching for exact dependencies.

Bibliography

[BK78] R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.
[BL94] B. Beckermann and G. Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM Journal on Matrix Analysis and Applications, 15(3):804–823, 1994.
[BZ91] C. Brezinski and M. Redivo Zaglia. Extrapolation Methods: Theory and Practice. North-Holland, Amsterdam, 1991.
[Der94] H. Derksen. An algorithm to compute generalized Padé–Hermite forms. Technical Report 9403, Catholic University Nijmegen, January 1994.
[DL89] J. Denef and L. Lipshitz. Decision problems for differential equations. The Journal of Symbolic Logic, 54(3):941–950, 1989.
[FS96] P. Flajolet and R. Sedgewick. An Introduction to the Analysis of Algorithms. Addison-Wesley, Reading, Massachusetts, 1996.
[Pol37] G. Pólya. Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen. Acta Mathematica, 68:145–254, 1937.
[SZ94] B. Salvy and P. Zimmermann. Gfun: a Maple package for the manipulation of generating and holonomic functions in one variable. ACM Transactions on Mathematical Software, 20(2):163–177, 1994.
[vdH02] J. van der Hoeven et al. Mathemagix, 2002.
[vdH02b] J. van der Hoeven. Relax, but don't be too lazy. Journal of Symbolic Computation, 34:479–542, 2002.
[vdH06] J. van der Hoeven. Algorithms for asymptotic extrapolation. 2006. Submitted.
[vdH07] J. van der Hoeven. On effective analytic continuation. Mathematics in Computer Science, 1(1):111–175, 2007.
[Wen01] E. J. Weniger. Nonlinear sequence transformations: computational tools for the acceleration of convergence and the summation of divergent series. Technical Report math.CA/0107080, Arxiv, 2001.
[Wilf04] H. S. Wilf. generatingfunctionology. Academic Press, 2nd edition, 2004.