<\body>

<\author-affiliation>
  CNRS, Département de Mathématiques, Bâtiment 425, Université Paris-Sud, 91405 Orsay Cedex, France
</author-affiliation>

<\doc-note>
  This work was partially supported by the ANR Gecko and ANR-09-JCJC-0098-01 projects, and by the Digiteo 2009-36HD grant and Région Ile-de-France.
</doc-note>

Given complex numbers $z_1,\ldots,z_n$, it is classical that linear dependencies $\lambda_1 z_1+\cdots+\lambda_n z_n=0$ with $\lambda_1,\ldots,\lambda_n\in\mathbb{Z}$ can be guessed using the LLL-algorithm. Similarly, given formal power series $f_1,\ldots,f_n\in\mathbb{C}[[z]]$, algorithms for computing Padé-Hermite forms provide a way to guess relations $P_1 f_1+\cdots+P_n f_n=0$ with $P_1,\ldots,P_n\in\mathbb{C}[z]$. Assuming that $f_1,\ldots,f_n$ have a radius of convergence $r>0$ and given a real number $R>r$, we will describe a new algorithm for guessing linear dependencies of the form $g_1 f_1+\cdots+g_n f_n=h$, where $g_1,\ldots,g_n,h\in\mathbb{C}[[z]]$ have a radius of convergence at least $R$. We will also present two alternative algorithms for the special cases of algebraic and Fuchsian dependencies.

Consider an infinite sequence $f_0,f_1,\ldots$ of complex numbers. If $f_0,f_1,\ldots$ are the coefficients of a formal power series $f\in\mathbb{C}[[z]]$, then it is well known that the asymptotic behaviour of the sequence $(f_n)$ is closely related to the behaviour of the generating function $f(z)$ near its dominant singularity. Now if $f$ is the solution to some complicated equation, then it can be hard to compute its asymptotic behaviour using formal methods. On the other hand, the coefficients $f_0,f_1,\ldots$ of such a solution can often be computed numerically up to a high order. With this numerical evidence at hand, it is natural to raise the following questions:

<\enumerate>
  Can we guess the asymptotic behaviour of $f_0,f_1,\ldots$?

  Can we guess the behaviour of $f(z)$ near its dominant singularity?
</enumerate>

These questions can be regarded as part of the construction of a more general toolbox for the ``experimental mathematician''. More specifically, we advocate the systematic integration of ``guessing tools'' into symbolic computation packages. Indeed, current systems can be quite good at all kinds of formal manipulations. However, in the daily practice of scientific discovery, it would be helpful if these systems could also detect hidden properties, which may not be directly apparent or expected. Furthermore, the guessing tool is allowed to be heuristic, so that it merely suggests likely hidden properties; at a second stage, one may then search for full proofs using other techniques.

One well-known tool in this direction is the LLL-algorithm. Given $n$ numbers $z_1,\ldots,z_n\in\mathbb{C}$, it can be used in order to guess relations of the form

<\equation>
  \lambda_1 z_1+\cdots+\lambda_n z_n=0 \qquad (\lambda_1,\ldots,\lambda_n\in\mathbb{Z}).
</equation>

Given formal power series $f_1,\ldots,f_n\in\mathbb{C}[[z]]$, algorithms for the computation of Padé-Hermite forms can be used in order to guess linear relations

<\equation>
  P_1 f_1+\cdots+P_n f_n=0 \qquad (P_1,\ldots,P_n\in\mathbb{C}[z]).
</equation>

A well-known implementation is provided by the gfun package. Given a finite number of coefficients of a formal power series $f\in\mathbb{Q}[[z]]$, the package is able to guess a closed form formula for $f$ or a linear differential equation with coefficients in $\mathbb{Q}[z]$ satisfied by $f$. Indeed, it suffices to take $f_1=f,\;f_2=f',\;\ldots,\;f_n=f^{(n-1)}$ in the latter relation in order to search for small linear differential equations satisfied by $f$.

Unfortunately, many interesting formal power series do not admit closed form formulas and do not satisfy linear differential equations with polynomial coefficients. In that case, we can still use asymptotic extrapolation in order to guess the asymptotic behaviour of the coefficients. However, this only provides us with a rough idea about the behaviour of $f$ at its dominant singularity.
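To make the Padé-Hermite style of guessing concrete, here is a minimal numerical sketch (in Python; the function name, the degree and order bounds, and the singular-value threshold are our own choices and not part of any of the packages mentioned above). It looks for a small linear differential equation with polynomial coefficients satisfied by a truncated series, by computing an approximate left kernel of the corresponding multiplication matrix; the exact algorithms cited above should of course be preferred in practice.

<\verbatim-code>
import numpy as np
from math import factorial

def guess_linear_ode(f, order=2, degree=3):
    """Heuristically guess polynomials P_0, ..., P_order of degree <= degree
    such that P_0*f + P_1*f' + ... + P_order*f^(order) = O(z^N), given the
    first coefficients of f.  Returns the stacked coefficient vector of the
    P_j, or None if no convincing relation is found."""
    f = np.asarray(f, dtype=complex)
    N = len(f) - order                      # truncation order of the identity
    derivs = [f]                            # coefficients of f, f', f'', ...
    for _ in range(order):
        g = derivs[-1]
        derivs.append(np.arange(1, len(g)) * g[1:])
    cols = []                               # one column per monomial z^i * f^(j)
    for j in range(order + 1):
        for i in range(degree + 1):
            col = np.zeros(N, dtype=complex)
            top = min(N, i + len(derivs[j]))
            col[i:top] = derivs[j][:top - i]
            cols.append(col)
    A = np.column_stack(cols)               # assumes N >= (order+1)*(degree+1)
    _, s, Vh = np.linalg.svd(A)
    # a negligible smallest singular value suggests an approximate relation
    return Vh[-1] if s[-1] < 1e-10 * s[0] else None

# Example: f = exp(z) satisfies f' - f = 0
f = np.array([1.0 / factorial(i) for i in range(30)])
print(guess_linear_ode(f, order=1, degree=0))   # ~ [0.707, -0.707], i.e. f - f' = 0
</verbatim-code>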
In practice, it often happens that locally satisfies an algebraic or differential equation with analytic coefficients, even though these coefficients fail to be polynomials. For instance, combinatorics is full with examples of generating functions which are not algebraic, but whose dominant singularities are algebraic. In this paper, we will describe two approaches to detect analytic dependencies on acompact disk: the first one assumes that we have an algorithm for the analytic continuation of and relies on the monodromy of at its singularities. The second approach is purely numerical and makes no special assumptions on . On our way, we will encounter various interesting related problems, such as the determination of the radius of convergence or the singularities of. We will also propose heuristic solutions to these problems, thereby extending the basic toolbox for experimenting with analytic functions. Since all algorithms in this paper are directly or indirectly based on heuristics, it is important to investigate how much confidence we can attach to the computed results. In section, we will present a survey of different kinds of heuristic algorithms which are used in symbolic computation. Some of these heuristics are relatively benign when compared to others, so it will be useful to have some general insights on the reliability of different types of heuristic algorithms. The remainder of the paper is divided into two main parts. In the first part (sections, and), we will develop some basic tools for later use. In the second part, we turn to the main topic of this paper: the disclosure of local analytic relations between analytic functions. Two main types of analytic input functions will be considered: <\description-aligned> Functions for which we can merely compute a large number of the Taylor coefficients ,f,f,\> at the origin. Functions for which we have a reasonably efficient algorithm for their analytic continuation, so that we can also compute the Taylor coefficients of at other points besides the origin. The second situation typically arises when is the solution of a differential or more general functional equation; see section for a further discussion on analytic continuation. One of the most basic problems concerning an analytic function which is only known by its Taylor coefficients, is to determine its radius of convergence. In section, we will present two methods. The first one is related Cauchy-Hadamard's formula and provides rough approximations under mild assumptions on . The second method provides much better approximations, but only works if admits a single dominant singularity of asimpletype. Building on the algorithm for approximating the radius of convergence of , we next turn our attention to the problem of locating its singularities. In section, we first restrict ourselves to the dominant singularities of ( the singularities of minimal norm). Assuming , we next present algorithms for the exploration of the Riemann surface of beyond its dominant singularities. In the special case when is meromorphic on a compact disk |\>\\:\r|}>>, section contains a special purpose algorithm for the determination of a polynomial such that is analytic on |\>>. This algorithm works under the hypothesis and induces better algorithms for the problems in sections and in this special case. In the second part of this paper, we turn to the detection of dependencies between analytic functions on a compact disk |\>>. More precisely, assume that ,\,f\\|]>> are fixed convergent power series. 
We are interested in the determination of linear dependencies of the form <\equation> gf+\+g*f=h,\,g,h\\|]>|)>, where ,\,g,h> are analytic on |\>>. Modulo a scaling \f>, we may assume without loss of generality that . In section, we assume and first consider the two special cases of algebraic and Fuchsian dependencies. In the case of algebraic dependencies, we take =\> for a fixed function >, and also require that . In the case of Fuchsian dependencies, we take =\>> for a fixed function >, and again require that . The second algorithm only succeeds in the case when all singularities of on the disk are of Fuchsian type (see sections and for detailed definitions). The main idea behind the method is to compute the set of of functions generated by > and its analytic continuation around its singularities in |\>>. If there exists an algebraic dependency, then this set is finite and we may use it to find the dependency. If there exists a Fuchsian dependency, then the set is contained in finite dimensional vector space, and we may construct the dependency from a basis of this vector space. In section, we only assume and consider the general problem of determining dependencies of the form(). We will describe a purely numerical algorithm based on Gram-Schmidt orthogonalization for finding such relations. The idea is to simply truncate the power series expansions and then apply the numerical method. We will prove that an asymptotic convergence theorem which shows that this approach is at least correct ``at the limit''. In the last section, we will present some numerical experiments with the algorithm from section. In section, we first consider some examples of relations which were recognized by the algorithm. In section, we will also examine the behaviour of the algorithm in the case when ,\,f> are analytically independent. In fact, the algorithm from section still requires some human cooperation for deciding whether we really found a relation. Based on the numerical experiments, we conclude with some perspectives for the design of a fully automatic algorithm. <\acknowledgments*> We would like to thank the three referees for their detailed and valuable comments on a first version of this paper. As soon as we enter the area of heuristic and non mathematically proven algorithms, it is natural to ask how much confidence can be attached to the output. In fact, there is a large spectrum of situations which can occur, depending on the precise nature of the heuristics. There may also be a trade-off between computational complexity and correctness: is it better to rely on a slow deterministic algorithm for polynomial factorisation over a finite field, or on a fast probabilistic algorithm? \ Without striving for exhaustion, section a catalogue of different kinds of heuristic algorithms which occur in the area of symbolic computation. Each of the heuristic algorithms considered in this paper will fit into one of these categories. We will also describe some standard strategies in order to increase the level of confidence and examine typical examples for which heuristic algorithms may fail. A high confidence can typically be attached to algorithms which are correct modulo conjectures. One important example of this situation in computer algebra is zero testing of transcendental functions or constants. For example, consider the class of , built up from the rationals > using the operations,> exp andlog. 
The following conjecture is a major open problem in number theory: <\conjecture> Let ,\,\> be complex numbers which are linearly independent over the rational numbers >. Then the transcendence degree of ,\,\,exp|)>,\,exp|)>|)>:\> is at least . From the computer algebra point of view, the conjecture implies that all numerical relations between exp-log constants can be deduced from the usual rules =exp x*exp y> and =log x+log y>. Based on this fact, Richardson has given a zero test for exp-log constants with the following properties: (1) if the algorithm terminates on a given input, then the result is correct; (2) if the conjecture holds, then the algorithm always terminates; (3) if the algorithm does not terminate on a given input, then a counterexample to the conjecture can be constructed from this input. Similar situations occur in algorithmic number theory, depending on the correctness of Riemann's hypothesis. In this paper, the algorithms in sections and rely on zero tests for analytic functions. If we know that our analytic functions lie in special classes(such as the class of exp-log functions, or the class of differentially algebraic functions), then these zero tests might be reduced to suitable conjectures. Another frequent situation is that an algorithm relies on one of more other heuristic subalgorithms, with the property that, for agiven input, we obtain the right output whenever all calls of the subalgorithm(s) return the correct results. From a theoretical point of view, this corresponds to correctness modulo one or more oracles. For instance, in section, we will give heuristic algorithms for the computation of the radius of convergence of an analytic function. This will be a basic building block for many of the other algorithms in this paper. Nevertheless, the correctness of several of these other algorithms reduces to the correctness of this basic building block. If, for input functions in a more restricted class, we may determine their convergence radii with larger accuracy, then this will immediately make all the other algorithms more reliable. For instance, if we consider a class ( rational or algebraic functions) for which convergence radii can be computed exactly, then the algorithms in section are no longer heuristic. In computer algebra, there are plenty of algorithms which are correct unless some degeneracy occurs. For instance, solving a system of polynomial equations using numerical homotopy methods is correct unless the chosen paths accidentally hit a singularity. Moreover, if paths are chosen at random, then such accidents occur with probability zero. Similarly, in order to test whether a real analytic function is zero, it suffices to pick a random number and test whether > vanishes. In practice, there is no such thing as a random real number, since we can only represent numbers with finite precision. The above examples of heuristic algorithms which are correct ``with probability '' are really correct with probability |)>>, where is the working precision. There are many other examples of heuristic algorithms which are correct with high probability. For instance, in order to show that two polynomials in > have no common divisors, it suffices to pick a ``sufficiently random'' prime and show that the reductions of and in > have no common divisors. We also recall that there are two classical types of probabilistic algorithms. The examples mentioned above are only correct with high probability and fall into the class of so-called Monte Carlo algorithms. 
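As an illustration of the Monte Carlo situation described in the previous paragraph, the following sketch (using SymPy; the function name, the number of trials and the size of the random primes are our own choices) tests coprimality of two integer polynomials modulo random primes. Note the asymmetry which will be discussed below: a constant gcd modulo a degree-preserving prime certifies coprimality over the rationals, whereas a non-constant gcd only gives heuristic evidence for a common factor.

<\verbatim-code>
from random import randrange
from sympy import GF, Poly, gcd, nextprime, symbols

x = symbols('x')

def probably_coprime(P, Q, trials=3):
    """Monte Carlo coprimality test for integer polynomials: reduce modulo a
    few random primes and compute gcds there.  A constant gcd modulo a prime
    preserving both degrees *proves* coprimality over Q; otherwise we only
    obtain heuristic evidence for a common factor."""
    for _ in range(trials):
        p = nextprime(randrange(10**6, 10**7))
        Pp, Qp = P.set_domain(GF(p)), Q.set_domain(GF(p))
        if Pp.degree() == P.degree() and Qp.degree() == Q.degree():
            if gcd(Pp, Qp).degree() == 0:
                return True     # certified: the resultant is nonzero modulo p
    return False                # heuristically: a common factor is likely

# (x^2+1)*(x+3) and (x^2+1)*(x-5) share the factor x^2+1
P = Poly((x**2 + 1) * (x + 3), x)
Q = Poly((x**2 + 1) * (x - 5), x)
print(probably_coprime(P, Q), probably_coprime(P, Poly(x - 7, x)))   # False True
</verbatim-code>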
The second kind, Las Vegas algorithms, are always correct, but only fast with high probability. For instance, all known fast algorithms for factoring polynomials over a finite field $\mathbb{F}_q$ are probabilistic algorithms of Las Vegas type. We would also like to underline another aspect of heuristic tests, such as zero tests: it frequently occurs that only one direction of the test is difficult and based on heuristics, whereas the other direction is ``easy''. For instance, assume that we want to test whether an exp-log constant is zero. If the constant is non-zero, then it is easy to prove this by evaluating the constant using interval arithmetic at a sufficiently high precision and verifying that zero is not in the interval enclosure of our constant. The difficult case, for which Richardson's algorithm relies on Schanuel's conjecture, is when the exp-log constant is zero and we want to prove this.

Essentially all numerical algorithms are heuristic in the sense that they only compute approximate answers. Numerical analysts therefore have a lot of experience in dealing with heuristics and intentionally vague arguments. The very aim of numerical algorithms is often not to compute a mathematically sound result, but rather to design a method which ``works'' for solving a practical problem. This does not prevent experts in this area from having a good sense of how accurate and trustworthy the various methods are. Ideally speaking, any numerical algorithm comes with a detailed error analysis. Most textbooks on numerical analysis start with a section on the various sources of error and on how to take the machine accuracy into account. For more statistical algorithms, there is also a tradition of trying to quantify the level of trust that can be attached to the computed results. Finally, for many algorithms, it is possible to automate the error analysis by using interval arithmetic.

Classical numerical algorithms operate at a fixed precision $p$. From a theoretical point of view, it can be investigated whether a given numerical algorithm is at least ``ultimately correct'' in the sense that the computed output tends to the mathematically correct result when $p$ tends to infinity. If the numerical algorithm comes with a rigorous error analysis, then we have a means for controlling the distance $\varepsilon_p$ between our approximations at a given precision $p$ and the ultimate limit. In particular, for any $\varepsilon>0$, we may then select a precision $p$ with $\varepsilon_p\leq\varepsilon$. However, there are many situations in which we have no means for controlling the distance $\varepsilon_p$, even though our algorithm is ultimately correct. For instance, the computation of the radius of convergence $\rho$ of a differentially algebraic function is undecidable, even though there exists an algorithm for the computation of a sequence of approximations $\rho_0,\rho_1,\ldots$ with $\lim_{n\to\infty}\rho_n=\rho$. Similarly, for a large class of functions, the formula below gives an approximation for the radius of convergence of a series (when computing with a precision of $p$ bits), but the formula is only ultimately correct.

Since the outcome of a heuristic algorithm cannot entirely be trusted, it is natural to search for ways to at least increase the trustworthiness of the result. Depending on the kind of heuristics that were used, this can either be done by rerunning the same algorithm for different parameters, or by combining the algorithm with other algorithms. For instance, in numerical analysis, a frequent trick is to rerun the same algorithm at different working precisions and compare the results.
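The easy direction of the exp-log zero test, combined with the precision-doubling trick just mentioned, takes only a few lines using mpmath's interval arithmetic. This is a sketch under our own naming conventions; it can only ever certify that a constant is nonzero, never that it is zero.

<\verbatim-code>
from mpmath import iv

def provably_nonzero(make_interval, max_prec=4096):
    """Try to prove that a constant is nonzero by evaluating it in interval
    arithmetic at increasing precisions; return True as soon as the enclosure
    excludes zero, and give up (inconclusively) at max_prec bits."""
    prec = 64
    while prec <= max_prec:
        iv.prec = prec
        enclosure = make_interval()
        if enclosure.a > 0 or enclosure.b < 0:   # zero lies outside the interval
            return True
        prec *= 2
    return False   # possibly zero: a symbolic proof (Richardson) would be needed

# exp(1)*exp(-1) - 1 is exactly zero, so the loop never succeeds:
print(provably_nonzero(lambda: iv.exp(1) * iv.exp(-1) - 1))           # False
# log(2) + log(3) - log(5) is nonzero and quickly certified:
print(provably_nonzero(lambda: iv.log(2) + iv.log(3) - iv.log(5)))    # True
</verbatim-code>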
Similarly, probabilistic algorithms of Monte Carlo type can be rerun several times in order to increase our confidence that we obtained the correct result. We also mentioned the fact that equality testing of exp-log constants is quite asymmetric in the sense that proving equalities is usually much harder than proving inequalities. This suggests the use of a mixed strategy for this problem, which combines aheuristic equality test with a deterministic, interval arithmetic based, inequality prover. In case of equality, we may optionally run Richardson's more expensive algorithm and try to prove the equality. In applications, we notice that heuristic algorithms are often used in a similar way: to speed up the main algorithm, which can be perfectly deterministic. A nice example is the computation of a Gröbner basis over the rationals using modular arithmetic. In Faugère's software, modular arithmetic is used in order to get a precise idea about those S-polynomials that reduce to zero, thereby avoiding many unnecessary reductions when performing the actual computations over the rationals. In the particular case of a heuristic algorithm whose purpose is to guess dependencies, we finally notice that we may always build in the trivial safeguard to check all proposed relations ; see also remark in section. It is a good practice to attack heuristic algorithms and try to construct systematic examples on which the heuristics fail. Successful attacks usually indicate that the algorithm was either wrong, or that the heuristics only make sense under certain additional assumptions. Failed attacks generally increase the confidence in the algorithm and sometimes lead into theoretical insight on why precisely the heuristics work well. One particular angle of attack concerns the computational complexity or the numerical stability. This means that we search for examples for which the heuristic algorithm is very slow or numerically unstable. This line of thought underlies much of the numerical experiments which are presented in section. Instead of attacking the heuristic algorithms, it is also possible to look for interesting classes of examples on which the heuristic provably works (or works better). In this paper, some interesting special classes of analytic functions are meromorphic, algebraic and so-called Fuchsian functions on a compact disk (see section). Typical classes which have to be considered for counterexamples are lacunary series, functions with a natural boundary, or holonomic functions that are not Fuchsian. Let be an analytic function which is given by its power series +f*z+f*z+\> at the origin. A natural problem is to compute the radius of convergence =\> of . For sufficiently large classes of analytic functions, such as solutions to algebraic differential equations over the rationals, this problem is generally undecidable, although efficient and high quality algorithms for the computation of lower bounds for > do exist in this case. In this section, we will consider heuristic algorithms, and only assume that a large number of coefficients ,f,\,f> of are known numerically. The first idea which comes into our mind is to apply Cauchy-Hadamard's formula for the radius of convergence>: <\equation*> \1>=limsup\> |\|>|i>. Given 2> coefficients ,\,f> of , we may for instance use the approximation <\equation> \1>\maxi\n> |\|>|i>. For most convergent power series , this formula is ultimately exact in the sense that <\equation> \1>=lim\> maxi\n> |\|>|i>. 
This is in particular the case when |\|>> is ultimately convex or ultimately concave (this follows from the fact that the sequence |\|>/|\|>> converges in an ultimately monotone way to>, in this case). The set of for which () holds is also stable under the transformation \f|)>> for any \>=\:n\0|}>>. Of course, we may replace by *n> in() for any \>. Notice however that the lacunary power series z>> does not satisfy(). The formula() has the disadvantage that it has not been scaled appropriately: when replacing by , where \>> is such that \1>, we obtain different approximations for > and >. This drawback can be removed by replacing > by /f> for some appropriate coefficient index k\n> with \0>. In fact, one can even do better, and compute > using the formula =/f|\|>>> for appropriate indices k\l\n> with \0> and \0>. Let us now show how to read off such indices from the numerical Newton diagram of. Let =|\|>|)>>, where we understand that \>. Then the of is the convex hull of the half lines <\eqnarray*> +\\>>|||\|>+y|)>:y\0|}>.>>>> For a fixed \\1>, say =1/2>, let |\*n|\>>, and consider the Newton diagram of *z+\+f*z>. There exists a minimal subset >,\,P>|}>\,\,P|}>> with \\\i=n-1>, such that the Newton diagram is also the convex hull of the half lines >+\\>> for j\k>. Graphically speaking, the >> are the vertices of the Newton diagram (see figure). For a fixed \,1|)>>, say =3/4>, we may now determine the unique edge >P>> of the Newton diagram such that \\*n\i>, and replace the formula() by <\equation> \1>\>|f>>|\|>-i>>. For a large class of convergent power series , the formula() is again ultimately exact. The indices \\\i> can be computed from the coefficients ,\,f> using a linear traversal in time >. This yields an efficient algorithm for the approximation of> which turns out to be more accurate than() in practice. The formula() has been implemented in the system. For 64>, the computed radius is usually correct up to one decimal digit in the relative error. <\big-figure||gr-mode||gr-frame|>|magnify|0.594603556059332|gr-line-width|5ln|gr-magnify|0.594603556782386|gr-dash-style|11100|gr-text-at-halign|center|gr-grid||1>|gr-grid-old||1>|gr-edit-grid-aspect|||>|gr-edit-grid||1>|gr-edit-grid-old||1>|gr-grid-aspect-props|||>|gr-grid-aspect||>|||||||||||||||||||||>>||>>|||||||||||||||||||||>>||>>|||>>|||>>||>>||>>||\|>>|>>||>>||>>>>> Illustration of the numerical Newton diagram for *log > and truncated at order . Taking =3/4>, we get =15>, =16> and \/f|\|>\1.08875>. The red line is the unique line which extends the edge P>, of slope /f|\|>\0.085>. The formula() usually yields a reasonable estimate for >, even in very degenerate cases when there are several singularities at distance > or close to >. However, if admits asingle isolated singularity > at distance > of the origin, with no other singularities at distance close to >, then it is often possible to read off much more information about this singularity from the coefficients ,f,\,f>. For instance, it frequently (always?) occurs that the quotients /f> simply tend to1>> for \>. Moreover, as we will show below, if the singularity at > has a known type, then the approximation 1>\f/f> can be further improved. If nothing is known about the singularity at>, then we may still try and see what happens if we apply such specialized algorithms for various frequent types of singularities: with a bit of luck, we will both obtain useful information about the type of singularity and the numerical value of>. 
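Both heuristics of this section are easy to implement. The sketch below (Python; the function names are ours, and μ = 1/2, λ = 3/4 are the default choices from the text) computes the crude Cauchy-Hadamard estimate and the Newton-diagram estimate from a vector of numerical coefficients; zero coefficients are simply skipped, so lacunary series remain problematic, as pointed out above.

<\verbatim-code>
import numpy as np

def radius_cauchy_hadamard(f):
    """Rough estimate 1/rho ~ max_{n/2 <= i < n} |f_i|^(1/i)."""
    f = np.abs(np.asarray(f, dtype=complex))
    n = len(f)
    i = np.arange(n // 2, n)
    return 1.0 / np.max(f[i] ** (1.0 / i))

def radius_newton_diagram(f, mu=0.5, lam=0.75):
    """Better estimate: take the edge of the numerical Newton diagram (the
    lower convex hull of the points (i, log|f_i|), i >= mu*n) lying above the
    abscissa lam*n, and return rho ~ |f_l/f_k|^(-1/(l-k)) for its endpoints."""
    f = np.abs(np.asarray(f, dtype=complex))
    n = len(f)
    pts = [(i, np.log(f[i])) for i in range(int(mu * n), n) if f[i] > 0]
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    hull = []                                    # lower convex hull
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    for (k, _), (l, _) in zip(hull, hull[1:]):   # edge containing lam*n
        if k <= lam * n <= l:
            return (f[l] / f[k]) ** (-1.0 / (l - k))
    return radius_cauchy_hadamard(f)

# f = 1/(1 - z/3) has radius of convergence 3
f = 3.0 ** -np.arange(64)
print(radius_cauchy_hadamard(f), radius_newton_diagram(f))   # ~3.0  ~3.0
</verbatim-code>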
We say that is at >, if satisfies a polynomial equation <\equation*> P*f+\+P=0, where ,\,P> are analytic functions at > with \0>. In that case, we have <\equation*> f\\n>*n*+a*n1/p>+a*n2/p>+\|)>, with \> and ramification index \>>, whence <\equation> log f\log \|)>*n+*log n+log a+|a>*n1/p>+\. Using the E-algorithm, we may now compute approximations for the first coefficients log \>, , >, /a>, of the expansion(). It turns out that this strategy greatly improves the accuracy of the approximation of > (see also). Similarly, we say that is at >, if satisfies a linear differential equation <\equation*> L*\ f+\+L*f=0, where =|)>*\/\ z> and ,\,L> are analytic functions at > with |)>\0>. In that case, the Taylor coefficients > satisfy the asymptotic expansion <\equation*> f\\n>*n>*+c*n1/p>+c*n2/p>+\|)>, where \\>, \>> and the > are polynomials in of degrees > d> . Again, the E-algorithm or more general algorithms for asymptotic extrapolation can be used to compute > with a high accuracy. Notice that these algorithms also provide estimates for the accuracies of the computed approximations. In cases where nothing particular is known about the behaviour of at its dominant singularity, the strategy of asymptotic extrapolation still may work and both provide useful information about the nature of the singularity and the numerical value of >. In order to explore the analytic function =f+f*z+f*z+\> determined by the coefficients ,f,f,\> more closely, and in particular determine its singularities in a given region, it is useful to have a means for performing the analytic continuation of . The way is given sometimes provides us with such a means. For instance, if is the solution to an initial value problem <\eqnarray*> >>||,\,f>|)>>>|,\,f>|)>>||>>> then the continuation =f> of at any sufficiently small is again the solution of an initial value problem <\eqnarray*> >>||,\,f>|)>>>|,\,f>|)>>||,\,f>|)>.>>>> In it is shown in detail how to turn this into an algorithm for the analytic continuation of . Notice also that the power series solutions to an initial value problem can be computed efficiently using the algorithms from. In general, it is always possible to compute the continuation > numerically. However, this kind of ``poor man's'' analytic continuation induces a big loss of accuracy. Let us illustrate this on the fundamental example <\eqnarray*> ||.>>>> In order to compute |)>> with a precision of bits using the formula <\eqnarray*> |)>>||>*f*u,>>>> we need to truncate this expansion at an order for which <\eqnarray*> *>|>|>|)>*2=*2.>>>> Putting *i> and using Stirling's formula, this yields <\eqnarray*> |)>*log|)>-\*log \+\*log u+log>|>|*log 2.>>>> For instance, in order to expand > at order with precision , we get <\eqnarray*> |)>*log|)>-\*log*\-|)>*log 2>|>|>|>|>|>>> and we need an expansion of at the origin at order +1|)>*n>. In general, for every analytic continuation step, we need to multiply the expansion order by a constant factor which depends on the desired precision and the ratio between the step size and the radius of convergence. An experimentally better strategy for poor man's analytic continuation is to postcompose with a suitable analytic function =\*z+\*z+\> instead of =z+u>. If admits a dominant singularity at >, then> should be chosen in such a way that \> has 1>|)>> as its only singularity in the unit disk, and such that 1>|)>|\|>> is as small as possible. One may typically take > to be a composition of rational functions of the form >. 
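The basic ``poor man's'' continuation step discussed above amounts to recentring a truncated Taylor expansion, which can be done by a Horner scheme on polynomials. A small sketch follows (the helper name is ours); as explained above, each recentring step requires the expansion order to be increased by a constant factor, so the accuracy loss is very real.

<\verbatim-code>
import numpy as np

def taylor_shift(f, u):
    """Return the Taylor coefficients of f(u + z) at z = 0, given the first
    len(f) coefficients of f at the origin (a 'poor man's' continuation step:
    only accurate when |u| is well inside the radius of convergence)."""
    f = np.asarray(f, dtype=complex)
    n = len(f)
    g = np.zeros(n, dtype=complex)
    for c in f[::-1]:                 # Horner: g <- g*(u + z) + c, truncated
        g = u * g + np.concatenate(([0.0], g[:-1]))
        g[0] += c
    return g

# f(z) = 1/(1-z); recentring at u = 1/2 gives 1/(1/2 - z) = 2 + 4z + 8z^2 + ...
f = np.ones(200)
print(taylor_shift(f, 0.5)[:4].real)   # ~ [ 2.  4.  8. 16.]
</verbatim-code>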
Yet another strategy would be to consider a Padé-Hermite approximation $f\approx P/Q$ of $f$ and then to expand the rational function $P/Q$ (instead of $f$ itself) at the new point.

Assuming that we have a means for analytic continuation (whether this means is efficient and accurate or only a ``poor man's'' solution), a natural question is how to gather information about the singularities of $f$. First of all, we may want to locate the dominant singularity (or singularities) of $f$ with good precision. Let us first assume that $f$ admits a unique dominant singularity at $\sigma$ and no other singularities of norm close to $|\sigma|$. The preceding sections provide us with heuristic algorithms for the computation of $|\sigma|$. More generally, for any $u\in\mathbb{C}$ of sufficiently small norm, the point $\sigma-u$ is also the dominant singularity of $f_{+u}=f(u+z)$, so we may compute $|\sigma-u|$ using the same heuristic algorithms. Computing $|\sigma-u|$ and $|\sigma-\mathrm{i}u|$ for two such points, the singularity $\sigma$ is now uniquely determined by its distances $|\sigma|$, $|\sigma-u|$ and $|\sigma-\mathrm{i}u|$ to $0$, $u$ and $\mathrm{i}u$ (see figure). Even if we do not know $\sigma$ beforehand, we may apply the above method for $u$ with $|u|=\lambda|\sigma|$, for some $\lambda<1$. In order to check the correctness of the computed value of $\sigma$ a posteriori, we also compute the radius of convergence $\rho'$ of $f_{+\sigma/2}$. Then $\sigma$ is a dominant singularity of $f$ if and only if $\rho'=|\sigma|/2$. If this check fails, then we simply divide $\lambda$ by two and repeat the same procedure until the right value of $\sigma$ is found. In practice, the convergence radii can only be approximated, so all equality tests have to be replaced by approximate equality tests, for which the tolerance in relative precision is taken slightly larger than the estimated relative precision with which the convergence radii are computed. In particular, the above method computes $\sigma$ with approximately the same accuracy as the radius of convergence $|\sigma|$.

Furthermore, if our procedure for analytic continuation is accurate enough, then we may ``zoom in'' on a singularity and determine its location with higher precision. More precisely, assume that we have a rough approximation $\tilde\sigma$ of the dominant singularity $\sigma$. For some small $\varepsilon>0$, we then compute an approximation $\tilde\sigma'$ of the dominant singularity of $f_{+\mu\tilde\sigma}$ with $\mu=1-\varepsilon$. This yields the improved approximation $\mu\tilde\sigma+\tilde\sigma'$ of $\sigma$.

Assume now that $f$ admits more than one dominant singularity (or other singularities of norms close to $|\sigma|$), but still a finite number of them. On the one hand, by computing the radii of convergence of $f_{+u}$ for a sufficiently large number of points $u$ with $|u|<|\sigma|$, we may determine all dominant singularities. On the other hand, for any two adjacent dominant singularities $\sigma$ and $\sigma'$, we may check the absence of other singularities in between by looking at the radii of convergence of $f_{+\lambda(\sigma+2\sigma')/3}$ and $f_{+\lambda(2\sigma+\sigma')/3}$ for some $\lambda<1$ close to $1$. This yields a criterion for deciding whether we took a ``sufficiently large number of points $u$''. From the numerical point of view, rough initial approximations can be further improved by zooming in. In the figure we have shown an example where we zoomed in on one among eight singularities. After zooming in, we see that all other singularities are far away, so they do not pollute the computation of the radius of convergence.

The generalized algorithm is particularly useful in the special case when there are two dominant singularities which are complex conjugates. This annoying situation prevents us from using the technique of asymptotic extrapolation in a direct way. Nevertheless, we may use it after zooming in on one of the two singularities.
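The trilateration step described above, which recovers σ from its distances to 0, u and iu, is elementary. In the sketch below, radius_at is assumed to be any of the heuristic radius estimators described earlier, applied to the recentred series (for instance the Newton-diagram estimator combined with the taylor_shift helper sketched above).

<\verbatim-code>
import numpy as np

def locate_singularity(radius_at, u):
    """Locate the dominant singularity sigma of f from the convergence radii
    r0, r1, r2 of f recentred at 0, u and i*u (u real, 0 < u < r0):
    |sigma|^2 = r0^2, |sigma-u|^2 = r1^2, |sigma-i*u|^2 = r2^2, and
    subtracting the equations gives Re(sigma) and Im(sigma) directly."""
    r0, r1, r2 = radius_at(0.0), radius_at(u), radius_at(1j * u)
    x = (r0**2 - r1**2 + u**2) / (2 * u)
    y = (r0**2 - r2**2 + u**2) / (2 * u)
    return complex(x, y)

# toy test with exact distances to a known singularity sigma = 0.6 + 0.8i
sigma = 0.6 + 0.8j
print(locate_singularity(lambda a: abs(sigma - a), 0.5))   # ~ (0.6+0.8j)
</verbatim-code>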
\; |||||| <\big-figure||gr-frame|>|gr-geometry||gr-grid||gr-grid-old||1>|gr-edit-grid-aspect|||>|gr-edit-grid||gr-edit-grid-old||1>|gr-grid-aspect|||>|gr-grid-aspect-props|||>|gr-dash-style|1111010|||>|||||||>>|||>>||>|>|>|*u|>>>> Determination of > from its distances to , and *u>. |<\cell> <\big-figure||gr-frame|>|gr-geometry||magnify|1.189207115|gr-arrow-end|\|gr-line-width|2ln|||>>|||>|||>>|||>||||||||||||||line-width|2ln|||>>>>> Zooming in on one among eight dominant singularities. >>>> In the case when our method for analytic continuation allows us to turn around singularities, it is natural to ask whether we can explore the Riemann surface of beyond the dominant singularities. In general, it is always possible to approximate the Riemann surface of by an organically growing sequence of more and more precise Riemann surfaces of aspecial form; we refer to for details. In this section, we will restrict ourselves to a more particular situation: given the closed disk |\>> of center and radius , we will assume that there exist a finite number of points ,\,\>\\> in its interior such that is defined above \|\>\,\,\|}>>. Our task is to find the points ,\,\>. One complication is that some of the singularities> may not directly be visible from the origin by following a straight line, but only after turning around one or more other singularities. In principle, more and more singularities might therefore be disclosed by making successive turns around the singularities which were already found. In order to ensure our algorithm to terminate, we will make the additional assumption <\description> There exists a finite dimensional vector space of analytic functions above >, which contains and all its analytic continuations. This is in particular the case if <\description> The Riemann surface of admits a finite number of sheets above >. In sections and, we will encounter two natural situations in which the assumptions are satisfied. Consider a finite number of points ,\,z\\> together with a graph which admits =,\,z|}>> as its vertices. We will say that is if is connected and for every edge ,z|)>\E>, the straightline segment between > and > lies in >. Assume that is known at one of the points >. For each \V>, we may then define > to be the vector space spanned by all analytic continuations of by following edges of . We may compute the collection ,\,V> of these vector spaces using the following algorithm: <\named-algorithm-old|continue> >,z,G|)>> an analytic function at > and an admissible graph with =,\,z|}>> for each \V>, a basis > of > <\algorithm-body> [Initialize] \>|}>> and =\> for all i>> <\algorithm-body> [Saturate] <\indent> For any \V>, B> and ,z|)>\E> do <\indent> If -z|)>>\span|)>>, then <\indent> Set \B\-z|)>>|}>> and repeat step 2 <\algorithm-body> [Terminate] <\indent> Return ,\,B|)>> <\remark> The test -z|)>>\span|)>> requires an equality test for analytic functions. From the heuristic point of view, we may simply fix two numbers \> and test whether the first Taylor coefficients coincide up to a relative error >. Assume now that singularities in ,\,\|}>> are known, say ,\,\>. We need atest in order to check whether , as well as a way to find at least one of the remaining singularities in ,\,\|}>> if s>. Let \0> be such that \-\|\|>/8> for all j\t> and |\|>\r-8*\>. We now consider a graph in the form of a hexagonal honeycomb, which fills up the disk |\>> and such that each edge ,z|)>\E> has length -z|\|>=\> (see also figure below). 
Picking sufficiently at random, the graph is also admissible, so we may apply the algorithm . For each vertex \V>, we also compute the minimum > of the convergence radii of the elements in >. We claim that if and only if <\eqnarray*> >||t>-\|\|>>>>> for all . Indeed, if , then the equality clearly holds. Assume for contradiction that the equality holds for all , even though s>. For each of the outermost vertices \V> of the honeycomb with |\|>\r-\>, we have -\|\|>\\\6*\>. This implies that |\|>\r-5*\>, whence > lies in the interior of one of the cells of the honeycomb. Let > with t> be such that -\|\|>> is minimal. Now consider a vertex \V> of the cell of the honeycomb which contains > such that -\|)>/-\|)>\1> (see figure). Then -\|\|>\-\|\|>>. Now -\|\|>\-\|\|>+-\|\|>\4*\> also implies that -\|\|>=\>. Consequently, \-\|\|>\\>. This contradiction completes the proof of our claim. Whenever () does hold for some , then this indicates the presence of a singularity in ,\,\|}>> near >. It thus suffices to determine the dominant singularities of the elements in > in order to find at least one of these missing singularities. <\remark> In practice, we only have heuristic algorithms for approximating the radii>. Therefore, and as in the previous subsection, the test() should really be replaced by an approximate equality test. <\remark> We picked a graph in the form of a honeycomb in order to simplify the proof of our claim. In practice, it is more efficient to construct graphs with small edges near the singularities and larger edges elsewhere, such as Voronoï diagrams. Following, it can also be shown that the optimal shape of a polygon for turning around a singularity contains edges and not (at least in the case when is the solution to a differentially algebraic equation). <\big-figure||gr-frame|>|gr-geometry||gr-fill-color|pastel blue||||>>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||||>||||>>|||>|j>|>|t+1>|>>>> Illustration of the proof of our claim that if and only if () holds for all . The colored region corresponds to the set of all points with |)>/|)>\1>. In the case when is meromorphic on the compact disk|\>> of radius , then there exists an alternative for the algorithms from section: we may directly search for a polynomial such that the radius of convergence of is strictly larger than. If such a polynomial=P> exists, then we may select to be monic and of minimal degree; this particular polynomial will be called the of on |\>>. If is not meromorphic on|\>>, then we define =\>. Given \>, we also define the > by =\> if =\> or \N> and =\> otherwise. Guarded denominators may be computed using simple linear algebra, as follows: <\named-algorithm-old|denom> > the first 2*D> coefficients of , a radius and a degree bound an approximation of a monic polynomial with \r>, > chosen of minimal degree D>, or > (failed) <\algorithm-body> [Initialize] <\indent> 0> [Determine ] <\indent> Solve the linear system <\equation> >|>|>>|>||>>|>|>|>>>>>*>>|>>|>>>>>+>>|>>|>>>>>=0 \; Set z+P*z+\+P> [Terminate or loop] <\indent> Heuristically determine \\>, based on the first coefficients of If \r> then return If then return > Set d+1> and go to step 2 In order to study the reliability of this algorithm, let us first introduce some more notations. First of all, given integers 0> and 2*d-1>, we will denote <\eqnarray*> >||>|>|>>|>||>>|>|>|>>>>>.>>>> The approximate correctness of the algorithm relies on the following technical lemma which will be proved in the next section. 
<\lemma> If admits exactly roots in the closed unit disk, when counted with multiplicities, then the matrix norm of > satisfies |H|\<\|\|\>>=N>> for \>. <\theorem> Let > and assume that the test \r> in the algorithm always returns true if and only if \> and deg Q>. Then > if and only if >. Moreover, if \>, then <\eqnarray*> |||)>>>>> for \> and any \\>. <\proof> Modulo a scaling >, we may assume without loss of generality that . Furthermore, the proof is clear if , so we will assume that 0> and \1>. The test \1> returns false as long as deg Q> (or if >). Assume now that reaches in the algorithm. Then the computed satisfies <\eqnarray*> >|>|>>|>||>>|>|>|>>>>>*-Q>>|>>|-Q>>>>>>||>>|>>|>>>>>,>>>> since <\eqnarray*> >|>|>>|>||>>|>|>|>>>>>*>>|>>|>>>>>+>>|>>|>>>>>>||>>|>>|>>>>>.>>>> Now let \\>. Given \,\|)>>, we have =O|)>|)>>. Using lemma, it follows that |H|\<\|\|\>>*|)>|)>=O>*|)>|)>=O|)>>. <\remark> The algorithm uses a very simple -step search for finding a polynomial of minimal degree with \r>. Using a binary search (doubling at each step at a first stage, and using a dichotomic search at a second stage), the number of steps can be reduced to >. > <\proof> Let us first consider the case when is a rational function with deg Q>>, =1>>, and such that only admits roots in the closed unit disk. We denote =\\:Q|)>=0|}>> and >=min\> Q>|)>\0> for each \\>. Using partial fraction decomposition, there exist numbers ,i>> with \>> such that <\eqnarray*> >||\\,i\\>>c,i>*N*\.>>>> In particular, we may factor <\eqnarray*> >||*\,>>>> where > is a matrix with entries in > and > is a diagonal matrix with entries > (and such that each \\> occurs >> times). The functions *\> with \>> being linearly independent, the matrix > is invertible in >, whence <\eqnarray*> |H|\<\|\|\>>>|||\*M|\<\|\|\>>=N>.>>>> This completes the proof of the lemma in the special case. In general, we may write >, with as above and where > is an analytic function on the closed unit disk. Then <\eqnarray*> |H|\<\|\|\>>>|||+H,N,d>|)>|\<\|\|\>>>>||||**H,N,d>|)>|)>|\<\|\|\>>>>||>||*H,N,d>|)>|\<\|\|\>>*|H|\<\|\|\>>>>||||)>*N>>>|||>,>>>> since <\eqnarray*> |H*H,N,d>|\<\|\|\>>>||>*r=o>>>> for some 1>. This completes the proof in the general case. Keeping the same spirit as in section, we will now turn our attention to two larger classes of algebraic and Fuchsian analytic functions on the compact disk |\>> of radius . Given afunction in one of these classes, we will show how to find the defining equation for, under the assumption that we have a high accuracy algorithm for its analytic continuation. As in remark, we will also assume the existence of a heuristic equality test for analytic functions. Let be an analytic function which is given by its power series +f*z+f*z+\> at the origin. Assume that can be continued analytically on a Riemann surface > above the closed disk |\>> of radius minus a finite set of points ,\,\>. Let > be the set of analytic functions on |\>>. We say that is on|\>> if there exists a polynomial relation <\equation> P*f+\+P=0, with \>. In that case, we may normalize the relation such that has minimal degree and such that > is a monic polynomial of minimal degree. In particular, all roots of > are inside the disk |\>>. Given a function which is algebraic on |\>>, this function clearly satisfies the assumption in section, whence we may compute the singularities ,\,\> using the algorithm described there. 
Conversely, the example <\equation*> f=\>>+\+\>> shows that there are functions which satisfy the assumption , but which are not algebraic on the disk |\>>. Assume that is algebraic on |\>>. For each singularity >, consider a path > from the origin to a point near > which avoids the other singularities, and let > be the operator which performs an analytic continuation along >, one turn around >, followed by an analytic continuation along 1>>. With these operators, we may use the following algorithm for the detection of algebraic dependencies: <\named-algorithm-old|alg_dep> ,\,\|)>,N,D,B|)>> an analytic function above |\>\,\,\|}>> and bounds , and a normalized algebraic dependency() with B> and \D>, or > (failed) <\algorithm-body> [Initialize] \>> <\algorithm-body> [Saturate] <\indent> If |)>\B> then return > If \\\> for all , then go to step 3 \\\C \\\\C \> Repeat step 2 <\algorithm-body> [Terminate] <\indent> Denote \,\,\|}>> Compute |)>*\*|)>=F+Q*F+\+Q> For each ,k|}>>, compute \,r,N,D|)>> If =\> for some , then return > lcm,\,D|)>*Q> If \D> then return > Return <\theorem> Modulo oracles for equality testing of analytic functions and the computation of guarded denominators and least common multiples, the algorithm > is correct. <\proof> Assume that satisfies a normalized relation(), with B> and \D>. Since > only contains distinct roots of , we have \\>|)>\|P> in |)>> throughout the algorithm. In particular |)>\d>, and we ultimately obtain stabilization \\\\C \\\>. At this point, analytic continuation around any of the points > leaves the polynomial invariant, so the coefficients of are analytic and single-valued on|\>\,\,\|}>>. On the other hand, given a singularity >, each solution \\> is also given by a convergent Puiseux series near >, whence so are the coefficients of . Since the only Puiseux series without monodromy around > are Laurent series, it follows that the coefficients of are meromorphic on |\>>. By the minimality assumption on, it follows that and *Q>. Since each coefficient =P/P> is meromorphic on |\>>, we may use the algorithm from the previous section in order to compute a polynomial > of minimal degree \deg P\D> such that *D\\>. The monic least common multiple ,\,D|)>> is nothing but the monic polynomial > of minimal degree such that *Q\\>. <\remark> In a world without oracles, the algorithm remains correct in the following sense: if the tests \\\> and =\> always return the right answer, then we will reach the line lcm,\,D|)>*Q> whenever the desired relation exists. For the final normalization step, we need a numerical algorithm for the computation of least common multiples which allows for small errors in the input coefficients. This is similar to the computation of approximate for which there exists an extensive literature; seefor example , or and references therein. <\remark> As an additional safeguard, we may heuristically compute the radii of convergence of ,\,P> and check whether they are indeed superior to . We say that is on|\>>, if satisfies a linear differential equation <\equation> L*f>+\+L*f=0, where \|]>>, and is Fuchsian at each point in |\>>. Again, we may normalize () such that has minimal order and such that > is a monic polynomial of minimal degree. Given a function which is Fuchsian on |\>>, the function now satisfies the assumption in section, so we may again compute the singularities ,\,\> using the algorithm described there. 
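Stepping back to the algebraic case for a moment: the final step of alg_dep expands the product (F − φ_1)⋯(F − φ_k), whose coefficients Q_i are truncated power series. Assuming that the orbit φ_1, …, φ_k has been provided by the analytic continuation oracle as numerical coefficient vectors, this expansion is a few lines of truncated series arithmetic (the helper names below are ours):

<\verbatim-code>
import numpy as np

def series_mul(a, b, N):
    """Product of two truncated power series, truncated again at order N."""
    return np.convolve(a, b)[:N]

def annihilating_polynomial(branches, N):
    """Expand Q(F) = (F - phi_1)...(F - phi_k) = F^k + Q_{k-1} F^{k-1} + ...
    + Q_0, where the phi_i are the branches of the orbit, given as truncated
    coefficient vectors.  Returns [Q_0, ..., Q_{k-1}, Q_k] with Q_k = 1."""
    Q = [np.zeros(N, dtype=complex)]
    Q[0][0] = 1.0                               # the constant polynomial 1
    for phi in branches:
        phi = np.asarray(phi[:N], dtype=complex)
        new = [np.zeros(N, dtype=complex) for _ in range(len(Q) + 1)]
        for i, q in enumerate(Q):               # multiply Q(F) by (F - phi)
            new[i + 1] += q
            new[i] -= series_mul(phi, q, N)
        Q = new
    return Q

# example: the orbit of sqrt(1-z) is {+sqrt(1-z), -sqrt(1-z)}, and the
# expansion recovers Q(F) = F^2 - (1 - z)
N = 8
s = np.zeros(N, dtype=complex); s[0] = 1.0
for i in range(N - 1):
    s[i + 1] = -s[i] * (0.5 - i) / (i + 1)      # binomial series of (1-z)^(1/2)
Q0, Q1, Q2 = annihilating_polynomial([s, -s], N)
print(np.round(Q0[:3].real, 6), np.round(Q1[:3].real, 6))   # Q_0 = -(1-z), Q_1 = 0
</verbatim-code>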
With ,\,C> as in the previous section, we may detect Fuchsian dependencies as follows: <\named-algorithm-old|fuch_dep> ,\,\|)>,N,D,B|)>> an analytic function above |\>\,\,\|}>> and bounds , and a normalized Fuchsian dependency() with B> and \D>, or > (failed) <\algorithm-body> [Initialize] \>> <\algorithm-body> [Saturate] <\indent> If |)>\B> then return > Let denote the vector space generated by a finite set of vectors If \|)>\Vect|)>> for all , then go to step 3 \\\ \|}>> for and \\> with \\Vect|)>> Repeat step 2 <\algorithm-body> [Terminate] <\indent> Denote \,\,\|}>> Compute lcm-\>,\,\-\>|)>=K*\+\+K> |)>|]>>, where >> denotes /\>> For each ,k|}>>, compute \,r,N,D|)>> If =\> for some , then return > lcm,\,D|)>*K> If \D> then return > Return <\theorem> Modulo oracles for equality testing of analytic functions and the computation of guarded denominators and least common multiples, the algorithm > is correct. <\proof> Assume that satisfies a normalized Fuchsian relation(), with B> and \D>. Throughout the algorithm, the set > only contains linearly independent solutions to =0>. Therefore, the smallest operator \\>-\>|)>> which vanishes on> divides in |)>|]>> on the right. In particular |)>\d>, and we ultimately obtain stabilization \+\+C \\\>. Consider one of the singularities >. Since is Fuchsian at >, the equation admits a fundamental system of solutions of the form <\equation*> \+u|)>=|)>*+\+\|)>|)>*u>, where \\> and ,\,\> are convergent power series. The coefficients of lie in the field > generated by all generalized power series of this form. Again, the only elements of> with a trivial monodromy around > are convergent Laurent series at >. Since analytic continuation around > leaves the operator unchanged, it follows that the coefficients of are meromorphic on |\>>. We conclude in a similar way as in the proof of theorem. <\remark> The hypothesis that admits at worse a Fuchsian singularity at > is essential for the algorithm to work. For instance, in the case of the function <\equation*> f=\>>+\>>, the monodromy around > is trivial, whence applying the algorithm for |\|>> would simply result in having => at step 3. Even though >> has no monodromy around >, this function is no longer a Laurent series. In fact, the desired vanishing operator has second order in this case, but it cannot be read off directly from >. In the case when we have no algorithm or no efficient algorithm for the analytic continuation of, the algorithms of section can no longer be used. In this section, we will describe apurely numerical approach for the detection of analytic relations on a closed disk of radius. Modulo the change of variables R*z>, it suffices to consider the case when . Given an order \>, we will denote by |]>> the set of power series \|]>> with =0> for all n>. When truncating the usual multiplication on |]>> at order , we give |]>> a ring structure, which is isomorphic to /|)>>. We will denote by |]>>> the Hilbert space of all power series \|]>> with finite > norm <\equation*> |f|\<\|\|\>>=|\|>+|\|>+\|>. Notice that any power series that converges on the closed unit disk has finite norm, but apower series with finite norm is only guaranteed to converge on the open unit disk. More generally, the norm of a vector ,\,f|)>\\|]>>> is given by <\equation*> |f|\<\|\|\>>=|f|\<\|\|\>>+\+|f|\<\|\|\>>>. The spaces |]>\\|]>\\> can be considered as an increasing sequence of Hilbert spaces, for the restrictions of the norm on |]>>> to the |]>>. 
Let r\1> and assume that ,\,f\\|]>> are power series with radii of convergence at least (notice that we do assume the > to be of finite norm). We want to solve the equation <\equation> g*f+\+g*f=h,\,g,h\\|]>>|)>. We will denote the vector space of all such relations ,\,\|)>=,\,g,h|)>\\|]>>> by >>. Since the equation() involves an infinite number of coefficients, we need to consider its truncated version at a finite order \>. Replacing ,\,f> by their truncations in |]>>, we will search for non-trivial solutions of the equation <\equation> g*f+\+g*f=h,\,g,h\\|]>|)>. In a way which will be made more precise below, we will require the norms of ,\,g|)>>> and to remain of the same orders of magnitude as the vector ,\,g|)>>. We will denote by > the vector space of all ,\,\|)>=,\,g,h|)>\\|]>> which satisfy(). If the norms of the solutions to the truncated problems tend to a limit for \>, then we will prove that these solutions tend to a solution of the original problem(). Let us reformulate the truncated problem in terms of linear algebra. The series ,\,f>> give rise to an > matrix <\equation*> M=>|||||||||||||||>|>||||||>|||||||||>|>|||||||||||||||>|>|>||||||||||||||>|>|>||||||||>||||||>|>|>||||||||||||||>|>|>|>|||||||||>||||>|>|>||>|||||||||>|||>|>|>|>|>|>|||||||||||>|>|>|||>||||||||||>|>|>|>|>|>|>|||||||||||>>>>. The unknown series ,\,g> give rise to a row vector <\equation*> G=>|>|>|>|>|>|>|>>>>>. Setting *f+\+g*f>, we then have <\equation*> G*M=>|>|>|>>>>. Putting the first entries on the right hand side and grouping by packets of entries, it follows that =G*M> encodes the relation ,\,\|)>=,\,g,h|)>>. This reduces the problem to finding those vectors for which |G*M|\<\|\|\>>> is of the same order of magnitude as the vector ,\,g|)>>. We start with the computation of a thin LQ decomposition of . This can for instance be done using the Gram-Schmidt process: starting with the first row, we orthogonally project each row on the vector space spanned by the previous rows. This results in adecomposition <\equation*> M=L*Q, where is a lower triangular n*d> matrix with ones on the diagonal and is an > matrix, whose rows are mutually orthogonal ( >=Id>). Now consider the matrix> formed by the last rows of 1>>. Then each row > gives rise to a relation(), encoded by =\*M>. Moreover, this relation is or -normal>, in the sense that =1> and =0> for all i>. Since *M> is the shortest vector in +Vect,\,\|)>|)>*M>, the relation is also minimal in norm, among all -normal relations. Choosing the row > for which |\*M|\<\|\|\>>> is minimal, our algorithm simply returns the corresponding relation. Then our algorithm has the following fundamental property: <\proposition> The algorithm returns a normal relation for which |G*M|\<\|\|\>>> is minimal. Let us now return to the case when ,\,f\\|]>> are no longer truncated, but all have aradius of convergence r>. A relation() is again said to be or -normal> if =1> and =0> for all i>. Under the limit \>, we claim that our algorithm finds a minimal normal relation, if there exists a relation of the form(): <\theorem> <\enumerate-alpha> Assume that >\0>. Then >> contains aminimal normal relation. For each , >> contains at most one minimal -normal relation. Assume that \\>> is a minimal -normal relation. For each \>, let > be the truncation of at order and consider the corresponding minimal -normal relation\\>. Then the relations > converge to > in |]>>>. If >=>, then the norms |\|\<\|\|\>>>, with > as in )>, are unbounded for \>. 
<\proof> A non trivial relation => is easily normalized: we first divide by >, where ,\,val g|}>>. We next divide by >, where is largest with \0>. Now the set of all -normal relations is a closed affine subspace of |]>>>. The orthogonal projection of on this subspace yields a unique -normal relation of minimal norm. This proves(). Assume that there exists an -normal relation )>, and let \\>> be the minimal such relation. Given an order \>, consider the minimal -normal relation > at this order. Truncation of this relation at asmaller order n> yields an -normal relation > at order with \-\|)>>, whence <\equation> |\|\<\|\|\>>=|\|\<\|\|\>>+|\-\|\<\|\|\>>. Moreover, since > is the projection of on the affine space of -normal relations at order, we have \-\|)>> and <\equation> |\|\<\|\|\>>=|\-\|\<\|\|\>>+|\|\<\|\|\>>. Similarly, truncation of the relation > at finite order yields an -normal relation ;n>> which satisfies <\equation*> |\|\<\|\|\>>=|\;n>|\<\|\|\>>+|\-\|\<\|\|\>>. In particular, |\|\<\|\|\>>\|\|\<\|\|\>>\|\|\<\|\|\>>\|\|\<\|\|\>>>, so that <\equation*> |\|\<\|\|\>>\|\|\<\|\|\>>\\\|\|\<\|\|\>>. For m\\>, it follows that |\|\<\|\|\>>-|\|\<\|\|\>>\0>, whence <\equation*> |\-\|\<\|\|\>>+|\-\|\<\|\|\>>=|\|\<\|\|\>>-|\|\<\|\|\>>\0, and <\equation*> |\-\|\<\|\|\>>\|\-\|\<\|\|\>>+|\-\|\<\|\|\>>\0. In other words, > is aCauchy sequence, which converges to a limit |~>\\|]>>> with ||~>|\<\|\|\>>\|\|\<\|\|\>>>. Now for each , we may write =,\,g,h|)>> and denote by> the truncation of the vector =,\,f,-1|)>> at order . By assumption, the inner products \\> and \\> vanish. Since |\-|)>|\<\|\|\>>\0> for all and \>, we also have |\\\|\<\|\|\>>\0> for \>, whence |~>\\=0>. Consequently, |~>> is an -normal relation in>>. By the minimality hypothesis on >, we conclude that |~>=\>, which proves(). In general, the existence of a bound with <\equation*> |\|\<\|\|\>>\|\|\<\|\|\>>\\\B still ensures > to be a Cauchy sequence, and its limit yields an -normal relation(). This proves the last assertion (). <\remark> For some very specific types of singularities, we would like to mention the existence of an alternative strategy for the determination of linear dependencies. This strategy relies on asymptotic extrapolation and does not involve analytic continuations around singularities, as in section. Let us illustrate the method on the example of afunction with an isolated smallest singularity at of the form <\equation*> f=a+b*log , where and are analytic at . Then the asymptotic extrapolation of +f*z+\> yields an asymptotic expansion of the form <\equation*> f\|n>+|n>+\. Using singularity analysis, we may then recover the function from the coefficients ,c,\>. However, this technique only works in special cases, since the first > terms of the asymptotic expansion may hide other terms, which need to be taken into account when searching for exact dependencies. A first implementation of the guessing algorithm from section has been made in the system. It is instructive to test the algorithm on examples for which the existence or absence of linear dependencies is known. In particular, in order to turn theorem into an algorithm which requires no manual tweaking, we need more precise information about the corresponding rates of convergence and divergence of the norms|\|\<\|\|\>>>. We have tested the orthogonalization algorithm on three examples of increasing difficulty, for which linear dependencies are known to exist. 
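For reference, here is a small floating-point sketch of the orthogonalization procedure of the previous section (the function name, the row ordering and the choice to minimize over all rows are our own simplifications; the experiments reported below were run with the implementation mentioned above, at much higher precision). Each row of the matrix M holds the coefficients of z^j·f_i; Gram-Schmidt orthogonalization keeps, for every residual, the coefficient vector expressing it in terms of the original rows, which is precisely a normal relation, and we return the one whose residual norm ‖G·M‖ is smallest.

<\verbatim-code>
import numpy as np

def guess_dependency(fs):
    """Look for g_1,...,g_k (degree < n) such that g_1*f_1 + ... + g_k*f_k
    has small coefficient norm; the first n coefficients of that product then
    play the role of h.  Returns (G, residual_norm) with G of shape (k, n)."""
    k, n = len(fs), len(fs[0])
    N = 2 * n - 1
    rows = []
    for f in fs:                       # rows of M: coefficients of z^j * f_i
        f = np.asarray(f, dtype=complex)
        for j in range(n):
            r = np.zeros(N, dtype=complex)
            r[j:j + n] = f
            rows.append(r)
    M = np.array(rows)
    m = k * n
    Q = M.copy()                       # residuals of the Gram-Schmidt process
    C = np.eye(m, dtype=complex)       # C[i] @ M == Q[i], with C[i][i] == 1
    best = None
    for i in range(m):
        for j in range(i):
            d = np.vdot(Q[j], Q[j])
            if abs(d) > 1e-30:
                mu = np.vdot(Q[j], Q[i]) / d
                Q[i] -= mu * Q[j]
                C[i] -= mu * C[j]
        norm = np.linalg.norm(Q[i])
        if best is None or norm < best[1]:
            best = (i, norm)
    i, norm = best
    return C[i].reshape(k, n), norm

# toy example: f2 = (1 - z/2)*f1, so the relation (1 - z/2)*f1 - f2 = 0 exists
rng = np.random.default_rng(0)
f1 = np.append(rng.standard_normal(31) * 0.9 ** np.arange(31), 0.0)
f2 = np.convolve([1.0, -0.5], f1[:31])          # 32 coefficients
G, res = guess_dependency([f1, f2])
print(res)                                      # ~ 1e-15: a dependency is found
</verbatim-code>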
In order to avoid problems due to numerical instability, we have used a large working precision, of 1024 or 2048 bits. The easiest example on which we can run the algorithm is <\eqnarray*> >||*z>\1|)>.>>>> Here follow the values for > at different orders and =2>: <\flat-size> <\eqnarray*> >||-0.20000*z>>|>||-\-5.8340*10*z-1.9447*106>*z>>|>||-\-5.0484*10*z-1.6828*1026>*z>>|>||-\-2.8307*10*z-9.4357*10107>*z>>>> The > clearly converge to a limit > with radius of convergence 1>. It should be noticed that we do not have =1-\*z>. In other words, the ``best'' numeric relation (from the point of view of >-norms) does not coincide with the ``best'' algebraic relation (as found by using Padé-Hermite approximation). A closer examination of the result showsthat <\eqnarray*> |>||*z|1-\*z>>>|+>>||+>\1|)>>>>> In particular, > decreases if > increases. Our second example concerns a logarithmic singularity: <\eqnarray*> >||*log>>|>||>>>> We obtain: <\flat-size> <\eqnarray*> |>||-\-0.010473*z+0.0033638*z>>|>||-\-0.0083494*z+0.0012131*z>>|>||-\-2.3908\104>*z+1.6114\105>*z>>|>||10+1.00000*z-1.6204*z-0.24832*z-0.75949*z-0.31293*z-\>>|||10*z-9.5406\10*z-9.3630\10*z-9.3108\10*z-\>>|||10*z+2.7475\10*z-1.9342\108>*z+6.2727\1010>*z>>|>||-\-0.0071095*z+0.0033638*z>>|>||-\-0.0071362*z+0.0012131*z>>|>||-\-2.2297\104>*z+1.6114\105>*z>>|>||-0.21949*z-0.25182*z-0.016358*z-\>>|||10*z-9.8597\10*z-8.6828\10*z-7.3899\10*z-\>>|||10*z+2.5583\10*z-1.8714\108>*z+6.2727\1010>*z>>>> The convergence only becomes apparent at higher orders. Contrary to the previous example, it seems that the series in the computed relation all have a radius of convergence . The norms of the computed results are given by |\|\<\|\|\>>=3.5577>, |\|\<\|\|\>>=3.5955>, |\|\<\|\|\>>=3.6739> and |\|\<\|\|\>>=3.6862>. A more interesting example from combinatorics is the enumeration of the number of alcohols of the form HO H> . Its generating series satisfies the functional equation <\equation*> s=1+z*+2*s|)>|3>. The asymptotic behaviour of the coefficients > is determined by the asymptotic behaviour of > at its singularity of smallest norm. Since \0> for all , this so-called dominant singularity necessarily lies on the positive real axis, and coincides with the radius of convergence of . Using asymptotic extrapolation, we find that 0.304218409>. In order to investigate > for close to , we apply our algorithm to <\eqnarray*> >|||)>>>|>||.>>>> The translation z+0.25> is done using power series evaluation until stabilization at the working precision. Although this was most straightforward to implement for this particular example, there exist better strategies for zooming in, as mentioned at the end of section. At different orders, we obtain: <\flat-size> <\eqnarray*> |>||-\+0.022645*z-0.0029284*z>>|>||-\+1.6968\10*z-1.0104\105>*z>>|>||-\+9.4113\10*z-3.6428\108>*z>>|>||-0.052535*z+0.033518*z-\>>|||10*z-3.4222\10*z-1.2120\10*z-1.5462\10*z-\>>|||10*z-7.4945\10*z+4.6860\10*z-1.3437\1010>*z>>|>||-\+0.0061880*z-9.9191\104>*z>>|>||-\+5.2359\10*z-3.4225\106>*z>>|>||-\+3.0034\10*z-1.2339\108>*z>>|>||-0.0067226*z-0.080434*z+\>>|||10*z-3.4490\10*z-2.9260\10*z-1.1090\10*z+\>>|||10*z-2.3161\10*z+1.5192\10*z-4.5516\1011>*z>>>> The corresponding norms are given by |\|\<\|\|\>>=4.3276>, |\|\<\|\|\>>=4.3845>, |\|\<\|\|\>>=4.3863> and |\|\<\|\|\>>=4.3863>. Again, the convergence is rather slow, and the computed series all seem to have radius of convergence . Since /0.15>, this means that we indeed guessed an analytic relation between > and > on the unit disk. 
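As mentioned earlier, a guessed relation of this kind can always be double-checked a posteriori: recompute the residual g_1 f_1 + ⋯ + g_k f_k − h at a truncation order higher than the one used for guessing and inspect the size of the new coefficients, rescaled by r^i when working on a disk of radius r. A sketch, with hypothetical helper names:

<\verbatim-code>
import numpy as np

def check_relation(gs, h, fs_high, n_guess, r=1.0):
    """Largest rescaled coefficient of g_1*f_1 + ... + g_k*f_k - h beyond the
    order n_guess at which the relation was guessed, using higher-order
    truncations fs_high of the inputs.  Large values flag a spurious guess."""
    N = len(fs_high[0])
    res = np.zeros(2 * N, dtype=complex)
    for g, f in zip(gs, fs_high):
        g = np.asarray(g, dtype=complex)
        f = np.asarray(f, dtype=complex)
        res[:len(g) + len(f) - 1] += np.convolve(g, f)
    res[:len(h)] -= np.asarray(h, dtype=complex)
    tail = res[n_guess:N]
    return float(np.max(np.abs(tail) * r ** np.arange(n_guess, N)))
</verbatim-code>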
<\remark>
The last two examples show that our algorithm usually returns a relation whose series all have radius of convergence 1, if such a relation indeed exists. Larger radii of convergence R can be obtained simply by running the algorithm on the rescaled inputs f_i(R·z) and scaling back the result.
</remark>

8.2. Rate of divergence in case of independence

It is also instructive to run the algorithm on examples where the f_i are known to be analytically independent. In that case, it is important to examine the behaviour of the norms ||g_n|| for n → ∞ and to find more precise criteria which will enable us to discard the existence of a nice dependency with a reasonable degree of certainty.

Logarithmic singularity. Let us return to the case of a simple logarithmic singularity

    f = log (1/(1 − ζ·z)),

where ζ > 1. Running our algorithm directly on this function at four successive orders, for ζ = 2 (coefficients of the computed g_n omitted), the results indeed do not converge, and the corresponding sequence of norms 12.324, 77.306, 3383.9, … diverges at moderate exponential speed.

Various singularities. In the above example, it is interesting to study the rate of divergence of the sequence of norms for other values of ζ. More generally, we can consider several types of singular behaviour φ_1(ζ·z), φ_2(ζ·z), φ_3(ζ·z) at the point 1/ζ, together with a fourth series φ_4 = ρ(ζ·z), where ρ = ρ_0 + ρ_1·z + ρ_2·z² + ⋯ is a series whose coefficients are chosen according to a random uniform distribution. The results are shown in table 1 below. For φ_1, φ_2 and φ_3, it seems that the norm does not depend much on the precise type of singularity, but only on ζ and on the truncation order n. For the last series φ_4, the dependencies on ζ and n approximately seem to follow the law

    log ||g_n|| ≈ (n/2)·log ζ.

This idealized law needs to be adjusted for the functions φ_1, φ_2, φ_3 with more common types of singularities. Although the law remains asymptotically valid for large values of ζ, the factor n/2 needs to be replaced by a smaller number for moderate values; there, the correction involves a constant c_φ which depends on the nature of the singularity. Also, the linear dependency of log ||g_n|| on n is only reached for large values of n.

Table 1. Computed values of ||g_n|| for various types of singular behaviour φ_i(ζ·z) and orders n.

Various singularities and d > 1. We have also applied our algorithm jointly to several series φ_i(ζ·z) as above and studied the dependency of ||g_n|| on the dimension d. The results are shown in table 2 below. In the bottom rows, φ, φ′, φ″, … stand for distinct uncorrelated random series of the form ρ(ζ·z). In that case, the above law generalizes to

    log ||g_n|| ≈ (n/(d + 1))·log ζ.

It also seems that this law can be adapted to functions with more common types of singularities, along similar lines as before. It would be interesting to have a theoretical explanation for these two empirical laws. The division by d + 1 is probably due to the fact that there are (d + 1)·n unknowns in the relation g_1·f_1 + ⋯ + g_d·f_d = h at order n.
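As a rough, machine-precision illustration of the first of these laws, one may feed a purely logarithmic input into the guess_relation sketch given earlier and watch the norms grow with the order; the observed slope of log ||g_n|| as a function of n should stay below (log ζ)/2 for d = 1. This is only a sanity check under the stated assumptions, not a reproduction of the high-precision experiments behind the tables.

```python
import numpy as np

zeta = 2.0

def log_series(n):
    # Coefficients of log(1/(1 - zeta*z)) mod z^n.
    return [0.0] + [zeta ** k / k for k in range(1, n)]

for n in (8, 12, 16, 20):
    _, _, nrm = guess_relation([log_series(n)], r=1.0)
    # In the independent case the norms should grow roughly geometrically;
    # the last column is the observed slope of log ||g_n|| with respect to n.
    print(n, nrm, np.log(nrm) / n)
```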
Of course, in the case that we are considering functions f_1, …, f_d with several singularities σ_1, …, σ_s on our disk of interest, only the singularities σ_i for which |σ_i| is smallest are ``numerically visible'' (except when there exist relations which do not involve these dominant singularities σ_i).

Table 2. Computed values of ||g_n|| for different input vectors of increasing dimensions d.

8.3. Towards criteria for an automatic final decision

In order to design a blackbox algorithm for the detection of dependencies g_1·f_1 + ⋯ + g_d·f_d = h, we need to make a final decision on whether we declare the input functions to be dependent or not. In view of the empirical laws from section 8.2, and whatever decision criterion we implement, the outcome can only be trusted if the following two conditions are satisfied:

1. The number d should not be too large.
2. There exists a large constant ζ ≫ 1 for which all singularities of the f_i are concentrated inside the disk of radius ζ^{−1} or outside the disk of radius ζ.

As usual, if we are interested in dependencies near an isolated singularity, then the second condition can often be achieved by zooming in on the singularity.

Our empirical observations in the previous section suggest that we should base the final decision on the empirical laws described there. First of all, in the case of independence, we should observe a more or less geometric increase of the norm ||g_n|| when running the algorithm at different orders n. If the input only admits an isolated singularity in the closed disk, then we may also compute its norm ζ^{−1} and compare the computed values of ||g_n|| with the expected values, as given by the law from section 8.2, or a suitable adaptation of this law when ζ becomes small. This method is most effective when ζ is large, so it is usually recommended to zoom in on the singularity before doing this comparison. In fact, if the law is satisfied for several successive zooming factors, then this may further increase our confidence that the input functions are independent.

For any numerical checks based on this law or a refinement of it, we also recommend to precondition the input series f_1, …, f_d. In particular, we recommend to multiply each f_i by a suitable constant, ensuring that

    max_{0 ≤ j < n} |f_{i,j}|·r^j = 1,

whenever we apply our algorithm at order n. Here r is an approximation of the radius of convergence, computed using one of the algorithms from section 3.

In fact, there is a trade-off between zooming in on the dominant singularity (thereby ensuring a large value of ζ) and using the original input coefficients (which are given with a higher accuracy and order). In order to get more insight into how far we should zoom in, we need to analyze the cost of the algorithm from section 7. The working precision p should in general be taken somewhat larger than n·log₂ ζ in order to keep the method numerically stable. Denoting by I(p) the cost of multiplying two p-bit numbers, a naive implementation of the Gram-Schmidt orthogonalization procedure yields a total cost of O(I(p)·d·n³). Denoting by MM(k, p) the cost of multiplying two k × k matrices with p-bit entries, and using a blockwise Gram-Schmidt procedure (similar to the procedure for matrix inversion in [Str69]), we obtain the better bound O(MM(n, p)). However, the matrix from section 7 has a very special form. With more work, it might therefore be possible to save an additional factor, but we have not actively tried to do so yet.
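For concreteness, the naive procedure whose cost we just estimated is an ordinary (modified) Gram-Schmidt pass over the columns of the matrix in question; the generic sketch below ignores the special structure alluded to above and works at machine precision.

```python
import numpy as np

def gram_schmidt(A):
    # Modified Gram-Schmidt orthonormalization of the columns of A.
    # For an m-by-k matrix this takes O(m*k^2) arithmetic operations, each on
    # p-bit numbers in the setting discussed above, whence the naive cost bound.
    Q = np.array(A, dtype=complex)
    m, k = Q.shape
    for j in range(k):
        for i in range(j):
            Q[:, j] -= np.vdot(Q[:, i], Q[:, j]) * Q[:, i]
        nrm = np.linalg.norm(Q[:, j])
        if nrm > 0:
            Q[:, j] /= nrm
    return Q
```

The blockwise variant mentioned above processes groups of columns at a time and replaces the inner loop by matrix-matrix products, which is how fast matrix multiplication (and hence the exponent ω below) enters the complexity bound.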
Taking into account the effect of a possible zoom, we may next evaluate the computational cost in terms of the desired ``output quality''. As a definition of the output quality, we may take the expected value q = (n/(d + 1))·log ζ of log ||g_n|| in the case when f_1, …, f_d are independent. In order to obtain accurate results, we need to compute with a bit precision larger than q. In terms of q, the time complexity thus becomes

    O(MM(q·d/log ζ, q·d)) = O(q^{ω+1}·d^{ω+1}/(log ζ)^{ω}),

where ω ≤ 2.373 is the exponent of matrix multiplication [WC87, VW12]. The complexity bound makes it clear that we should put a lot of effort into keeping ζ large. If f satisfies a differential equation, then zooming in can be done at low computational cost (see section 4). Otherwise, the required expansion order grows quickly with the inverse of the distance to the singularity: for ζ close to 1, reaching a given quality q requires an order of roughly n ≈ (d + 1)·q/log ζ, and we recall that, in order to apply the orthogonalization process after zooming in, we need to expand at an order which is several times larger than before.

Bibliography

D. Bailey, J. M. Borwein, V. Kapoor, and E. W. Weisstein. Ten problems in experimental mathematics. , 113(6):481--509, 2006.

J. M. Borwein and K. J. Devlin. . A. K. Peters Series. A. K. Peters, 2009.

K. Belabas. , volume 2 of , chapter Théorie algébrique des nombres et calcul formel. CEDRAM, 2011. Exp. No. 1, 40 pages, .

G. D. Birkhoff. Equivalent singular points of ordinary differential equations. , 74:134--139, 1913.

R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. , 25:581--595, 1978.

B. Beckermann and G. Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. , 15:804--823, 1994.

C. Brezinski and R. Zaglia. . North-Holland, Amsterdam, 1991.

D. V. Chudnovsky and G. V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory (Computers in Mathematics, Stanford, CA, 1986). In , volume 125, pages 109--232, New York, 1990. Dekker.

R. M. Corless, P. M. Gianni, B. M. Trager, and S. M. Watt. The singular value decomposition for polynomial systems. In , pages 195--207. ACM Press, 1995.

B. F. Caviness and M. J. Prelle. A note on algebraic independence of logarithmic and exponential constants. , 12/2:18--20, 1978.

D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In Annual Symposium on Theory of Computing, pages 1--6, New York City, May 25--27, 1987.

H. Derksen. An algorithm to compute generalized Padé-Hermite forms. Technical Report Rep. 9403, Catholic University Nijmegen, January 1994.

J. Denef and L. Lipshitz. Decision problems for differential equations. , 54(3):941--950, 1989.

E. Fabry. . PhD thesis, Paris, 1885.

J.-C. Faugère. Parallelization of Gröbner basis. In Hoon Hong, editor, , volume 5 of . Pasco'94, World Scientific, 1994.

Ph. Flajolet and R. Sedgewick. . Addison Wesley, Reading, Massachusetts, 1996.

J. von zur Gathen and J. Gerhard. . Cambridge University Press, 2nd edition, 2002.

J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix, 2002. .

J. van der Hoeven. Fast evaluation of holonomic functions. , 210:199--215, 1999.

J. van der Hoeven. Zero-testing, witness conjectures and differential diophantine approximation. Technical Report 2001-62, Prépublications d'Orsay, 2001.

J. van der Hoeven. Relax, but don't be too lazy. , 34:479--542, 2002.

J. van der Hoeven. Effective complex analysis. , 39:433--449, 2005.

J. van der Hoeven. On effective analytic continuation. , 1(1):111--175, 2007.
J. van der Hoeven. On asymptotic extrapolation. , 44(8):1000--1016, 2009.

J. van der Hoeven. , volume 2 of , chapter Calcul analytique. CEDRAM, 2011. Exp. No. 4, 85 pages, .

N. K. Karmarkar and Y. N. Lakshman. Approximate polynomial greatest common divisors and nearest singular polynomials. In Y. N. Lakshman, editor, , pages 35--42, Zürich, Switzerland, July 1996. ACM Press.

S. Lang. Transcendental numbers and diophantine approximation. , 77/5:635--677, 1971.

A. K. Lenstra, H. W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. , 261:515--534, 1982.

R. E. Moore. . Prentice Hall, Englewood Cliffs, N.J., 1966.

V. Pan. , volume 179 of . Springer, 1984.

H. Poincaré. Sur les intégrales irrégulières des équations linéaires. , 8:295--344, 1886.

G. Pólya. Kombinatorische Anzahlbestimmungen für Gruppen, Graphen und chemische Verbindungen. , 68:145--254, 1937.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. . Cambridge University Press, 3rd edition, 2007.

D. Richardson. How to recognise zero. , 24:627--645, 1997.

D. Richardson. Zero tests for constants in simple scientific computation. , 1(1):21--37, 2007.

V. Strassen. Gaussian elimination is not optimal. , 13:352--356, 1969.

A. J. Sommese and C. W. Wampler. . World Scientific, 2005.

B. Salvy and P. Zimmermann. Gfun: a Maple package for the manipulation of generating and holonomic functions in one variable. , 20(2):163--177, 1994.

V. Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In ACM Symp. on Theory of Computing (STOC'12), 2012. To appear. Preliminary version available at .

J. Verschelde. . PhD thesis, Katholieke Universiteit Leuven, 1996.

W. Wasow. . Dover, New York, 1967.

K. Weihrauch. . Springer-Verlag, Berlin/Heidelberg, 2000.

E. J. Weniger. Nonlinear sequence transformations: Computational tools for the acceleration of convergence and the summation of divergent series. Technical Report math.CA/0107080, Arxiv, 2001.

H. S. Wilf. . Academic Press, 2nd edition, 1994. .

Z. Zeng. The approximate gcd of inexact polynomials Part I: a univariate algorithm. , 2004.