Around the numeric-symbolic computation

of differential Galois groups

Joris van der Hoeven

Dépt. de Mathématiques (bât. 425)

Université Paris-Sud

91405 Orsay Cedex

France

Email: vdhoeven@texmacs.org

July 3, 2006

Let L ∈ 𝕂(z)[∂] be a linear differential operator, where 𝕂 is an effective algebraically closed subfield of ℂ. It can be shown that the differential Galois group of L is generated (as a closed algebraic group) by a finite number of monodromy matrices, Stokes matrices and matrices in local exponential groups. Moreover, there exist fast algorithms for the approximation of the entries of these matrices.

In this paper, we present a numeric-symbolic algorithm for the computation of the closed algebraic subgroup generated by a finite number of invertible matrices. Using the above results, this yields an algorithm for the computation of differential Galois groups, when computing with a sufficient precision.

Even though there is no straightforward way to find a “sufficient precision” for guaranteeing the correctness of the end-result, it is often possible to check a posteriori whether the end-result is correct. In particular, we present a non-heuristic algorithm for the factorization of linear differential operators.

1. Introduction

Let L ∈ 𝕂(z)[∂] be a monic linear differential operator of order n, where 𝕂 is an effective algebraically closed subfield of ℂ. A holonomic function is a solution f to the equation Lf = 0. The differential Galois group 𝒢 of L is a linear algebraic group which acts on the space H of solutions (see section 2.2 and [Kap57, vdPS03, Kol73]). It carries a lot of information about the solutions in H and about the relations between different solutions. For instance, the existence of non-trivial factorizations of L and the existence of Liouvillian solutions can be read off from the Galois group. This makes it an interesting problem to explicitly compute the Galois group of L.

A classical approach in this area is to let 𝒢 act on other vector spaces obtained from H by the constructions of linear algebra, such as symmetric powers Sym^k H and exterior powers Λ^k H [Bek94, SU93]. For a suitable such space S, the Galois group 𝒢 consists precisely of those invertible n×n matrices which leave a certain one-dimensional subspace of S invariant [Hum81, chapter 11]. Invariants in Sym^k H or Λ^k H under 𝒢 may be computed more efficiently by considering the local solutions of Lf = 0 at singularities [vHW97, vH97, vH96]. More recently, and assuming (for instance) that the coefficients of L actually lie in ℚ(z), alternative algorithms have appeared which are based on the reduction of the equation Lf = 0 modulo a prime number p [Clu04, vdP95, vdPS03].

In this paper, we will study another type of "analytic modular" algorithms, by studying the operator L in greater detail near its singularities using the theory of accelero-summation [É85, É87, É92, É93, Bra91, Bra92]. More precisely, we will use the facts recalled in section 2: the differential Galois group of L is generated, as a closed algebraic group, by a finite number of monodromy matrices, Stokes matrices and matrices in local exponential groups, and the entries of these matrices can be approximated both fast and with arbitrarily high precision.

When using these facts for the computation of differential Galois groups, the bulk of the computations is reduced to linear algebra in dimension n with multiple precision coefficients.

In comparison with previous methods, this approach is expected to be much faster than algorithms which rely on the use of exterior powers. A detailed comparison with arithmetic modular methods would be interesting. One advantage of arithmetic methods is that they are easier to implement in existing systems. On the other hand, our analytic approach relies on linear algebra in dimension n (with floating-point coefficients), whereas modulo p methods rely on linear algebra in dimension np (with coefficients modulo p), so the first approach might be a bit faster. Another advantage of the analytic approach is that it is more easily adapted to coefficient fields 𝕂 with transcendental constants.

Let us outline the structure of this paper. In section 2, we start by recalling some standard terminology and we shortly review the theorems on which our algorithms rely. We start with a survey of differential Galois theory, monodromy and local exponential groups. We next recall some basic definitions and theorems from the theory of accelero-summation and the link with Stokes matrices and differential Galois groups. We finally recall some theorems about the effective approximation of the transcendental numbers involved in the whole process.

Before coming to the computation of differential Galois groups, we first consider the simpler problem of factoring L in section 3. We recall that there exists a non-trivial factorization of L if and only if the Galois group of L admits a non-trivial invariant subspace. By using computations with limited precision, we show how to use this criterion in order to compute candidate factorizations or a proof that there exist no factorizations. It is easy to check a posteriori whether a candidate factorization is correct, so we obtain a factorization algorithm by increasing the precision until we obtain a correct candidate or a proof that there are no factorizations.

In section 4 we consider the problem of computing the differential Galois group of L. Using the results from section 2, it suffices to show how to compute the algebraic closure of a matrix group 𝒢 generated by a finite number of given elements. A theoretical solution for this problem based on Gröbner basis techniques has been given in [DJK03]. The main idea behind the present algorithm is similar, but more emphasis is put on efficiency (in contrast to generality).

First of all, in our context of complex numbers with arbitrary precision, we may use the LLL-algorithm for the computation of linear and multiplicative dependencies [LLL82]. Secondly, the connected component 𝒢_e of 𝒢 is represented as the exponential of a Lie algebra given by a basis. Computations with such Lie algebras essentially boil down to linear algebra. Finally, we use classical techniques for finite groups in order to represent and compute with the elements in 𝒢/𝒢_e [Sim70, Sim71, MO95]. Moreover, we will present an algorithm for non-commutative lattice reduction, similar to the LLL-algorithm, for the efficient computation with elements in 𝒢/𝒢_e near the identity.
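As a toy illustration of the first ingredient (our own sketch, not code from the paper): an integer dependency between numbers known only approximately can be recovered by lattice reduction of a scaled lattice. The helper `lll_reduce` below is a textbook floating-point LLL; the scaling N and the test numbers are illustrative assumptions.

```python
from math import log

def lll_reduce(basis, delta=0.75):
    """Textbook LLL reduction on integer row vectors; recomputes
    Gram-Schmidt data at every step (fine for tiny dimensions)."""
    B = [list(b) for b in basis]
    n = len(B)

    def gram_schmidt(B):
        Bs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = [float(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = sum(x * y for x, y in zip(B[i], Bs[j])) / \
                           sum(y * y for y in Bs[j])
                v = [x - mu[i][j] * y for x, y in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):           # size-reduce B[k]
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if sum(x * x for x in Bs[k]) >= \
           (delta - mu[k][k - 1] ** 2) * sum(x * x for x in Bs[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]      # swap and backtrack
            k = max(k - 1, 1)
    return B

# Recover the relation 2*log(2) - log(4) = 0 from approximations:
N = 10**12                                       # precision scaling
xs = [log(2), log(4)]
rows = [[1, 0, round(N * xs[0])],
        [0, 1, round(N * xs[1])]]
v = lll_reduce(rows)[0]                          # shortest reduced vector
print(v[:2])                                     # encodes 2*log2 - log4 = 0
```

The same mechanism yields multiplicative dependencies by applying it to logarithms, as here: the recovered vector expresses 2 log 2 − log 4 = 0, i.e. 2²/4 = 1.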

The algorithms in section 4 are all done using a fixed precision. Although we do prove that we really compute the Galois group when using a sufficiently large precision, it is not clear a priori how to find such a “sufficient precision”. Nevertheless, we have already seen in section 3 that it is often possible to check the correctness of the result a posteriori, especially when we are not interested in the Galois group 𝒢 itself, but only in some information provided by 𝒢. Also, it might be possible to reduce the amount of dependence on “transcendental arguments” in the algorithm modulo a further development of our ideas. Some hints are given in the last section.

Remark 1. The author first suggested the main approach behind this paper during his visit at the MSRI in 1998. The outline of the algorithm in section 4.5 came up in a discussion with Harm Derksen (see also [DJK03]). The little interest manifested by specialists in effective differential Galois theory for this approach is probably due to the fact that current computer algebra systems have very poor support for analytic computations. We hope that the present article will convince people to put more effort in the implementation of such algorithms. We started such an effort [vdHea05], but any help would be appreciated. Currently, none of the algorithms presented in this paper has been implemented.

2. Preliminaries

2.1. Notations

Throughout this paper, we will use the following notations:

𝕂
An algebraically closed field of constants.
Matn(𝕂)
The algebra of n×n matrices with coefficients in 𝕂.
GLn(𝕂)
The subgroup of Matn(𝕂) of invertible matrices.
Vect(S)
The vector space generated by a subset S of a larger vector space.
Alg(S)
The algebra generated by a subset S of a larger algebra.

Vectors are typeset in bold face, 𝒗 = (v_1, …, v_n), and we use the following vector notations:

𝒗 · 𝒘 = v_1 w_1 + ⋯ + v_n w_n,  𝒗^𝒌 = v_1^{k_1} ⋯ v_n^{k_n}.

Matrices M ∈ Mat_n(𝕂) will also be used as mappings M : 𝕂^n → 𝕂^n; 𝒗 ↦ M𝒗. When making a base change in 𝕂^n, we understand that we perform the corresponding transformations M ↦ P M P^{-1} on all matrices under consideration. We denote by Diag(X_1, …, X_p) the diagonal matrix with entries X_1, …, X_p. The X_i may either be scalars or square matrices. Given a matrix M ∈ Mat_n(𝕂) and a vector 𝒗 ∈ 𝕂^n, we write 𝒗^M for the vector 𝒘 with w_i = v_1^{M_{i,1}} ⋯ v_n^{M_{i,n}} for all i.

2.2. Differential Galois groups

Consider a monic linear differential operator L = ∂^n + L_{n-1} ∂^{n-1} + ⋯ + L_0 ∈ 𝕂(z)[∂], where 𝕂 is an algebraically closed subfield of ℂ. We will denote by 𝒮 = 𝒮_L ⊆ 𝕂 ∪ {∞} the finite set of singularities of L (in the case of ∞, one considers the transformation z ↦ z^{-1}). A Picard-Vessiot extension of 𝕂(z) is a differential field 𝒦 ⊇ 𝕂(z) such that

PV1
𝒦 = 𝕂(z)⟨h_1, …, h_n⟩ is differentially generated by 𝕂(z) and a basis of solutions 𝒉 = (h_1, …, h_n) ∈ 𝒦^n to the equation Lf = 0.
PV2
𝒦 has 𝕂 as its field of constants.

A Picard-Vessiot extension always exists: given a point z_0 ∈ 𝕂 ∖ 𝒮 and i ∈ {1, …, n}, let h_i be the unique solution to Lf = 0 with h_i^{(j)}(z_0) = δ_{i,j+1} for j ∈ {0, …, n-1}. We call 𝒉 = 𝒉_{z_0} = (h_1, …, h_n) the canonical basis for the solution space of Lf = 0 at z_0, and regard 𝒉 as a column vector. Taking 𝒦 = 𝕂(z)⟨h_1, …, h_n⟩, the condition PV2 is trivially satisfied, since 𝒉(z_0 + ε) ∈ 𝕂[[ε]]^n ⊆ 𝕂((ε))^n and the constant field of 𝕂((ε)) is 𝕂.

Let 𝒦 be a Picard-Vessiot extension of 𝕂(z) and let 𝒉 ∈ 𝒦^n be as in PV1. The differential Galois group 𝒢_{𝒦/𝕂(z)} of the extension 𝒦/𝕂(z) is the group of differential automorphisms of 𝒦 which leave 𝕂(z) pointwise invariant. It is classical [Kol73] that 𝒢_{𝒦/𝕂(z)} is independent (up to isomorphism) of the particular choice of the Picard-Vessiot extension 𝒦.

Given an automorphism σ ∈ 𝒢_{𝒦/𝕂(z)}, any solution f to Lf = 0 is sent to another solution. In particular, there exists a unique matrix M = M_{σ,𝒉} ∈ GL_n(𝕂) with σ h_i = (M𝒉)_i = Σ_{j=1}^{n} M_{i,j} h_j for all i. This yields an embedding ρ_𝒉 of 𝒢_{𝒦/𝕂(z)} into GL_n(𝕂) and we define 𝒢_{L,𝒉} ≔ ρ_𝒉(𝒢_{𝒦/𝕂(z)}). Conversely, M ∈ GL_n(𝕂) belongs to 𝒢_{L,𝒉} if every differential relation P(h_1, …, h_n) = 0 satisfied by h_1, …, h_n is also satisfied by M h_1, …, M h_n (with P ∈ 𝕂{F_1, …, F_n}). Since this assumption constitutes an infinite number of algebraic conditions on the coefficients of M, it follows that 𝒢_{L,𝒉} is a Zariski closed algebraic matrix group. Whenever 𝒈 = P𝒉 is another basis, we obtain the same matrix group 𝒢_{L,𝒈} = P 𝒢_{L,𝒉} P^{-1} up to conjugation.

Assume now that 𝕂̂ ⊇ 𝕂 is a larger algebraically closed subfield of ℂ. Then the field 𝒦̂ = 𝕂̂(z)⟨h_1, …, h_n⟩ is again a Picard-Vessiot extension of 𝕂̂(z). Furthermore, the Ritt-Raudenbush theorem [Rit50] implies that the perfect differential ideal of all P ∈ 𝕂{F_1, …, F_n} with P(h_1, …, h_n) = 0 is finitely generated, say by G_1, …, G_k. But then G_1, …, G_k is still a finite system of generators of the perfect differential ideal of all P ∈ 𝕂̂{F_1, …, F_n} with P(h_1, …, h_n) = 0. Consequently, 𝒢̂_{L,𝒉} ⊆ GL_n(𝕂̂) (i.e. the Galois group regarded as an algebraic group over 𝕂̂) is determined by the same algebraic equations as 𝒢_{L,𝒉}. We conclude that 𝒢_{L,𝒉} = 𝒢̂_{L,𝒉} ∩ GL_n(𝕂).

Let 𝒦 be a Picard-Vessiot extension of 𝕂(z). Any differential field ℳ with 𝕂(z) ⊆ ℳ ⊆ 𝒦 naturally induces an algebraic subgroup ℳ′ ⊆ 𝒢_{𝒦/𝕂(z)} of automorphisms of 𝒦 which leave ℳ fixed. Inversely, any algebraic subgroup ℋ of 𝒢_{𝒦/𝕂(z)} gives rise to the differential field ℋ′ with 𝕂(z) ⊆ ℋ′ ⊆ 𝒦 of all elements which are invariant under the action of ℋ. We say that ℳ (resp. ℋ) is closed if ℳ = ℳ′′ (resp. ℋ′′ = ℋ). The extension ℳ/𝕂(z) is said to be normal if every element in ℳ ∖ 𝕂(z) is moved by an automorphism of ℳ over 𝕂(z). The main theorem from differential Galois theory states that the Galois correspondences are bijective [Kap57, Theorem 5.9].

Theorem 2. With the above notations:

  1. The correspondences ℳ ↦ ℳ′ and ℋ ↦ ℋ′ are bijective.
  2. The group ℋ is a closed normal subgroup of 𝒢_{𝒦/𝕂(z)} if and only if the extension ℋ′/𝕂(z) is normal. In that case, 𝒢_{𝒦/𝕂(z)}/ℋ ≅ 𝒢_{ℋ′/𝕂(z)}.

Corollary 3. Let f ∈ 𝕂(z)⟨h_1, …, h_n⟩. If M f = f for all M ∈ 𝒢_{L,𝒉}, then f ∈ 𝕂(z).

2.3. Monodromy

Consider a continuous path γ on (ℂ ∪ {∞}) ∖ 𝒮 from z_0 ∈ 𝕂 to z_1 ∈ 𝕂. Then analytic continuation of the canonical basis 𝒉_{z_0} at z_0 along γ yields a basis of solutions to Lf = 0 at z_1. The matrix Δ_γ ∈ GL_n(ℂ) with

𝒉_{z_1} = Δ_γ 𝒉_{z_0} (1)

is called the connection matrix or transition matrix along γ. In particular, if z1=z0, then we call Δγ a monodromy matrix based in z0. We clearly have

Δ_{γ_2 γ_1} = Δ_{γ_2} Δ_{γ_1}

for the composition of paths, so the monodromy matrices based in z_0 form a group Mono_{z_0}, which is called the monodromy group. Given a path γ from z_0 to z_1, we notice that Mono_{z_1} = Δ_γ Mono_{z_0} Δ_γ^{-1}. Since any differential relation satisfied by 𝒉_{z_0} is again satisfied by its analytic continuation along γ, we have Mono_{z_0} ⊆ 𝒢_{L,𝒉_{z_0}} and 𝒢_{L,𝒉_{z_1}} = Δ_γ 𝒢_{L,𝒉_{z_0}} Δ_γ^{-1}.

Remark 4. The definition of transition matrices can be slightly changed depending on the purpose [vdH05b, Section 4.3.1]: when interpreting 𝒉z0 and 𝒉z1 as row vectors, then (1) has to be transposed. The roles of 𝒉z0 and 𝒉z1 may also be interchanged modulo inversion of Δγ.
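For readers who want to experiment, here is a minimal numeric sketch (our hypothetical example, not from the text) of how a monodromy matrix can be approximated: we transport the canonical basis of the operator L = ∂² + z^{-1}∂, with solutions 1 and log z, along a loop around the singularity 0, using a homemade RK4 integrator. In the canonical basis at z_0 = 1, the exact monodromy matrix is [[1, 0], [2πi, 1]].

```python
import cmath

def rk4_transport(f, y0, t0, t1, steps):
    """Integrate y' = f(t, y) with classical RK4 over [t0, t1]
    for a vector y of complex numbers."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# Operator L = d^2/dz^2 + (1/z) d/dz, singular at 0; solutions 1 and log z.
# Follow the loop z(t) = exp(2*pi*i*t), t in [0, 1], around the singularity.
def ode(t, y):
    z = cmath.exp(2j * cmath.pi * t)
    dz = 2j * cmath.pi * z                  # z'(t)
    f, fp = y                               # y = (f, f') in the z variable
    fpp = -fp / z                           # from L f = 0
    return [fp * dz, fpp * dz]              # chain rule: d/dt = z'(t) d/dz

# Canonical basis at z0 = 1: h1(1)=1, h1'(1)=0 and h2(1)=0, h2'(1)=1.
h1 = rk4_transport(ode, [1, 0], 0.0, 1.0, 4000)
h2 = rk4_transport(ode, [0, 1], 0.0, 1.0, 4000)
# Monodromy: h1 -> h1 and h2 -> 2*pi*i*h1 + h2.
print(h1[0], h2[0])
```

The endpoint values h2 ≈ (2πi, 1) recover the second row of Δ_γ up to the integration error; rigorous algorithms would replace RK4 by certified Taylor series methods.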

Now assume that L admits a singularity at 0 (for a singularity z_i ∈ 𝒮 ∩ ℂ, we may reduce to this case modulo a translation; singularities at infinity may be brought back to zero using the transformation z ↦ z^{-1}). It is well-known [Fab85, vH96] that Lf = 0 admits a computable formal basis of solutions of the form

f = (f_0(z^{1/p}) + ⋯ + f_{n-1}(z^{1/p}) log^{n-1} z) z^α e^{P(z^{-1/p})}, (2)

with f_0, …, f_{n-1} ∈ 𝕂[[z]], p ∈ ℕ^>, α ∈ 𝕂 and P ∈ 𝕂[z]. We will denote by 𝕊 the set of finite sums of expressions of the form (2). We may see 𝕊 as a differential subring of a formal differential field of "complex transseries" 𝕋 [vdH01a] with constant field ℂ.

We recall that transseries in 𝕋 are infinite linear combinations f = Σ_{𝔪 ∈ 𝔗} f_𝔪 𝔪 of "transmonomials" with "grid-based support". The set 𝔗 of transmonomials forms a totally ordered vector space for exponentiation by reals and the asymptotic ordering ≼. In particular, each non-zero transseries f admits a unique dominant monomial 𝔡_f. It can be shown [vdH01a] that there exists a unique basis 𝒉 = (h_1, …, h_n) of solutions to Lf = 0 of the form (2), with h_1 ≻ ⋯ ≻ h_n and (h_i)_{𝔡_{h_j}} = δ_{i,j} for all i, j ∈ {1, …, n}. We call 𝒉^0 = 𝒉 the canonical basis of solutions at 0, and there is an algorithm which computes 𝒉 as a function of L.

Let 𝕃 be the subset of 𝕊 of all finite sums of expressions of the form (2) with P = 0. Then any f ∈ 𝕊 can uniquely be written as a finite sum f = Σ_{𝔢 ∈ 𝔈} f_𝔢 𝔢 with f_𝔢 ∈ 𝕃, where 𝔈 = exp ⋃_{p ∈ ℕ^>} z^{-1/p} 𝕂[z^{-1/p}]. Let Expo_0 be the group of all automorphisms σ : 𝕊 → 𝕊 for which there exists a mapping λ : 𝔈 → 𝕂^×; 𝔢 ↦ λ_𝔢 with σ(f) = Σ_{𝔢 ∈ 𝔈} λ_𝔢 f_𝔢 𝔢 for all f ∈ 𝕊. Then every σ ∈ Expo_0 preserves differentiation and maps the Picard-Vessiot extension 𝒦 = 𝕂(z)⟨h_1, …, h_n⟩ of 𝕂(z) into itself. In particular, the restriction Expo_{0,𝒉} of Expo_0 to 𝒦 is a subset of 𝒢_{L,𝒉}.

Proposition 5. Assume that f ∈ 𝕊 is fixed under Expo_0. Then f ∈ 𝕃.

Proof. Assume that f ∉ 𝕃 and let 𝔢 ∈ 𝔈 ∖ {1} be an exponential with f_𝔢 ≠ 0. Let V be a supplement of the ℚ-vector space ℚ log 𝔢 in log 𝔈. Let σ : 𝕊 → 𝕊 be the mapping in Expo_0 which sends 𝔢^α 𝔣 to e^α 𝔢^α 𝔣 for each α ∈ ℚ and 𝔣 ∈ exp V. Then we clearly have σ(f) ≠ f.

Let {𝔢_1, …, 𝔢_n} be the set of exponentials corresponding to the elements of the canonical basis 𝒉^0. Using linear algebra, we may compute a multiplicatively independent set {𝔣_1, …, 𝔣_r} ⊆ 𝔢_1^ℤ ⋯ 𝔢_n^ℤ such that 𝔢_i = 𝔣_1^{β_{i,1}} ⋯ 𝔣_r^{β_{i,r}} for certain β_{i,j} ∈ ℤ and all i.

Proposition 6. With the above notations, the algebraic group Expo_{0,𝒉} is generated by the matrices Diag(λ^{β_{1,j}}, …, λ^{β_{n,j}}) for j ∈ {1, …, r}, where λ ∈ 𝕂 ∖ {μ : ∃ n ∈ ℕ^>, μ^n = 1} is chosen arbitrarily.

Proof. Let ℋ be the group generated by the matrices D_j = Diag(λ^{β_{1,j}}, …, λ^{β_{n,j}}). Notice that each individual matrix D_j generates 𝒮_j = {Diag(α^{β_{1,j}}, …, α^{β_{n,j}}) : α ∈ 𝕂^×} as a closed algebraic group: assuming 𝔢_j ≠ 1, the variety 𝒮_j is irreducible of dimension 1 and, λ not being a root of unity, D_j is not contained in an algebraic group of dimension 0. Now any σ ∈ Expo_{0,𝒉} is a diagonal matrix Diag(λ_{𝔢_1}, …, λ_{𝔢_n}) for some multiplicative mapping λ : 𝔈 → 𝕂^×. Hence

Diag(λ_{𝔢_1}, …, λ_{𝔢_n}) = Diag(λ_{𝔣_1}^{β_{1,1}}, …, λ_{𝔣_1}^{β_{n,1}}) ⋯ Diag(λ_{𝔣_r}^{β_{1,r}}, …, λ_{𝔣_r}^{β_{n,r}}).

Conversely, each element

σ = Diag(α_1^{β_{1,1}}, …, α_1^{β_{n,1}}) ⋯ Diag(α_r^{β_{1,r}}, …, α_r^{β_{n,r}})

determines a multiplicative mapping λ : 𝔣_1^ℤ ⋯ 𝔣_r^ℤ → 𝕂^×; 𝔣_1^{k_1} ⋯ 𝔣_r^{k_r} ↦ α_1^{k_1} ⋯ α_r^{k_r}, which may be further extended to 𝔈 using Zorn's lemma and the fact that 𝕂 is algebraically closed. It follows that σ ∈ Expo_{0,𝒉}.

Assume now that 2πi ∈ 𝕂 and let M_0 : 𝕊 → 𝕊 be the transformation which sends log z to log z + 2πi, z^α to e^{2πiα} z^α and e^{P(z^{-1/p})} to e^{M_0(P(z^{-1/p}))}. Then M_0 preserves differentiation, so any solution to Lf = 0 of the form (2) is sent to another solution of the same form. In particular, there exists a matrix Δ_0 with M_0 𝒉 = Δ_0 𝒉, called the formal monodromy matrix around 0. We have Δ_0 ∈ 𝒢_{L,𝒉}.

Proposition 7. Assume that f ∈ 𝕊 is fixed under Expo_0 and M_0. Then f ∈ 𝕂((z)).

Proof. We already know that f ∈ 𝕃. Interpreting f = c_k log^k z + ⋯ + c_0 as a polynomial in log z with c_k ≠ 0, we must have k = 0, since

M_0(f) − f = 2πi k c_k log^{k-1} z + ⋯ = 0.

Consequently, f is of the form f = Σ_{α ∈ 𝕂} f_α z^α, and

M_0(f) = Σ_{α ∈ 𝕂} e^{2πiα} f_α z^α = Σ_{α ∈ 𝕂} f_α z^α = f.

We conclude that e^{2πiα} = 1 for every α ∈ 𝕂 with f_α ≠ 0, whence f ∈ 𝕂((z)).

2.4. The process of accelero-summation

Let ℂ[[z^{ℚ^>}]] be the differential ℂ-algebra of infinitesimal Puiseux series in z, for the derivation δ = z ∂/∂z, and consider a formal power series solution f̃ ∈ 𝕆 = ℂ[[z^{ℚ^>}]][log z] to Lf̃ = 0. The process of accelero-summation makes it possible to associate an analytic meaning f to f̃ in a sector near the origin of the Riemann surface ℂ̇ of log, even in the case when f̃ is divergent. Schematically speaking, we obtain f through a succession of transformations:

f̃ —ℬ_{z_1}→ f̂_1 —𝒜_{z_1 z_2}^{α_1}→ f̂_2 → ⋯ → f̂_{p-1} —𝒜_{z_{p-1} z_p}^{α_{p-1}}→ f̂_p —ℒ_{z_p}^{α_p}→ f (3)

Each f̂_i is a "resurgent function" which realizes f̃_i(z_i) = f̃(z) in the "convolution model" with respect to the i-th "critical time" z_i = z^{1/k_i} (with k_i ∈ ℚ^> and k_1 > ⋯ > k_p). In our case, f̂_i is an analytic function which admits only a finite number of singularities above ℂ. In general, the singularities of a resurgent function are usually located on a finitely generated grid. Let us describe the transformations ℬ_{z_1}, 𝒜_{z_i z_{i+1}}^{α_i} and ℒ_{z_p}^{α_p} in more detail.

The Borel transform
We start by applying the formal Borel transform ℬ_{z_1} to the series f̃_1(z_1) = f̃(z) = Σ_{σ,r} f̃_{1,σ,r} z_1^σ log^r z_1 ∈ ℂ[[z_1^{ℚ^>}]][log z_1]. This transformation sends each z_1^σ log^r z_1 to

(ℬ_{z_1} z_1^σ log^r z_1)(ζ_1) = ζ_1^{σ-1} Σ_{i=0}^{r} (r choose i) γ^{(r-i)}(σ) log^i ζ_1,

where γ(σ)=1/Γ(σ), and extends by strong linearity:

f̂_1(ζ_1) = (ℬ_{z_1} f̃_1)(ζ_1) = Σ_{σ ∈ ℚ^>} Σ_{r ∈ ℕ} f̃_{1,σ,r} (ℬ_{z_1} z_1^σ log^r z_1)(ζ_1).

The result is a formal series f̂_1 ∈ ζ_1^{-1} ℂ[[ζ_1^{ℚ^>}]][log ζ_1] in ζ_1, which converges near the origin of ℂ̇. The formal Borel transform is a morphism of differential algebras which sends multiplication to the convolution product, i.e. ℬ_{z_1}(f̃ g̃) = (ℬ_{z_1} f̃) ∗ (ℬ_{z_1} g̃).
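As a concrete illustration (the classical Euler series, our example rather than one from the text): for a plain power series Σ a_n z^{n+1} with a single critical time, the rule z^σ ↦ ζ^{σ-1}/Γ(σ) simply divides the n-th coefficient by n!. The divergent series Σ (−1)^n n! z^{n+1} is thereby mapped to the convergent geometric series Σ (−ζ)^n = 1/(1+ζ).

```python
from math import factorial

def borel_coeffs(a):
    """Formal Borel transform of sum(a[n] * z**(n+1)): by the rule
    z**sigma -> zeta**(sigma-1)/Gamma(sigma), coefficient n is divided by n!."""
    return [an / factorial(n) for n, an in enumerate(a)]

# Euler's divergent series  sum (-1)^n n! z^(n+1):  its Borel transform is
# the geometric series  sum (-zeta)^n,  i.e. the convergent germ 1/(1+zeta).
N = 10
a = [(-1)**n * factorial(n) for n in range(N)]
b = borel_coeffs(a)
print(b)   # alternating 1.0, -1.0, 1.0, ...
```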

Accelerations
Given i < p, the function f̂_i is defined near the origin of ℂ̇, can be analytically continued along the axis e^{iα_i} ℝ^> ⊆ ℂ̇, and admits a growth of the form f̂_i(ζ_i) = exp O(|ζ_i|^{k_i/(k_i − k_{i+1})}) at infinity. The next function f̂_{i+1} is obtained from f̂_i by an acceleration of the form

f̂_{i+1}(ζ_{i+1}) = (𝒜_{z_i z_{i+1}}^{α_i} f̂_i)(ζ_{i+1}) = ∫_{ζ_i ∈ e^{iα_i} ℝ^>} K_{k_i,k_{i+1}}(ζ_i, ζ_{i+1}) f̂_i(ζ_i) dζ_i,

where the acceleration kernel Kki,ki+1 is given by

K_{k_i,k_{i+1}}(ζ_i, ζ_{i+1}) = (1/ζ_{i+1}) K_{k_{i+1}/k_i}(ζ_i / ζ_{i+1}^{k_{i+1}/k_i}),  K_λ(ζ) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} e^{z − ζ z^λ} dz. (4)

For large ζ on an axis with |arg ζ| < (1−λ) π/2, it can be shown that K_λ(ζ) decays like exp(−C |ζ|^{1/(1−λ)}) for some constant C > 0. Assuming that α_{i+1} satisfies

|k_{i+1} α_{i+1} − k_i α_i| < (k_i − k_{i+1}) π/2, (5)

it follows that the acceleration f̂_{i+1} of f̂_i is well-defined for small ζ_{i+1} on e^{iα_{i+1}} ℝ^>. The set 𝒟_i of directions α such that f̂_i admits a singularity on e^{iα} ℝ^> is called the set of Stokes directions. Accelerations are morphisms of differential ℂ-algebras which preserve the convolution product.

The Laplace transform
The last function f̂_p is defined near the origin of ℂ̇, can be analytically continued along the axis e^{iα_p} ℝ^> ⊆ ℂ̇, and admits at most exponential growth at infinity. The function f is now obtained using the analytic Laplace transform

f(z) = f_p(z_p) = (ℒ_{z_p}^{α_p} f̂_p)(z_p) = ∫_{ζ_p ∈ e^{iα_p} ℝ^>} f̂_p(ζ_p) e^{−ζ_p/z_p} dζ_p.

On an axis with

|arg z_p − α_p| < π/2, (6)

the function fp is defined for all sufficiently small zp. The set 𝒟p of Stokes directions is defined in a similar way as in the case of accelerations. The Laplace transform is a morphism of differential -algebras which is inverse to the Borel transform and sends the convolution product to multiplication.

Remark 8. Intuitively speaking, one has 𝒜_{z_i z_{i+1}}^{α_i} = ℬ_{z_{i+1}} ∘ ℒ_{z_i}^{α_i}.
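Continuing the classical Euler example (an illustration under our own conventions, not taken from the text): composing the Borel transform 1/(1+ζ) with the Laplace transform in the direction α = 0 produces an actual analytic function, and one can check numerically that this sum satisfies the same differential equation z² f′ + f = z as the divergent series it resums.

```python
from math import exp

def borel_sum_euler(z, T=40.0, n=4000):
    """Laplace integral f(z) = int_0^oo e^(-zeta/z)/(1+zeta) d zeta, computed
    after the substitution zeta = z*t as z * int_0^oo e^(-t)/(1+z*t) dt
    (composite Simpson rule, truncated at t = T).  This is the Borel sum
    (single critical time) of Euler's series sum (-1)^n n! z^(n+1) in the
    direction alpha = 0."""
    h = T / n
    def g(t):
        return exp(-t) / (1 + z * t)
    s = g(0) + g(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return z * s * h / 3

# The sum satisfies the same ODE as the formal series: z^2 f' + f = z.
z, h = 0.1, 1e-4
fp = (borel_sum_euler(z + h) - borel_sum_euler(z - h)) / (2 * h)
residual = z**2 * fp + borel_sum_euler(z) - z
print(abs(residual) < 1e-6)
```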

Given critical times k_1 > ⋯ > k_p in ℚ^> and directions α_1, …, α_p satisfying (5), we say that a formal power series f̃ ∈ 𝕆 is accelero-summable in the multi-direction 𝛂 = (α_1, …, α_p) if the above scheme yields an analytic function f(z) near the origin of any axis on ℂ̇ satisfying (6). We denote the set of such power series by 𝕆_{𝒌,𝛂}, where 𝒌 = (k_1, …, k_p). Inversely, given f̃ ∈ 𝕆, we denote by dom_as f̃ the set of all triples γ = (𝒌, 𝛂, z) such that f̃ ∈ 𝕆_{𝒌,𝛂} and such that f(z) is well-defined. In that case, we write f = sum_{𝒌,𝛂} f̃ and f(z) = f̃(γ).

The set 𝕆_{𝒌,𝛂} forms a differential subring of 𝕆 and the map f̃ ↦ f for f̃ ∈ 𝕆_{𝒌,𝛂} is injective. If 𝒌′ and 𝛂′ are obtained from 𝒌 and 𝛂 by inserting a new critical time and a corresponding direction, then we have 𝕆_{𝒌,𝛂} ⊊ 𝕆_{𝒌′,𝛂′}. In particular, 𝕆_{𝒌,𝛂} contains 𝕆_cv = ℂ{{z^{ℚ^>}}}[log z], where ℂ{{z^{ℚ^>}}} denotes the ring of convergent infinitesimal Puiseux series. Let ℛ_1 = ℝ ∖ 𝒟_1, …, ℛ_p = ℝ ∖ 𝒟_p be the sets of non-singular directions; each 𝒟_i is finite modulo 2π. Let 𝓡 be the subset of ℛ_1 × ⋯ × ℛ_p of multi-directions 𝛂 which verify (5). We denote 𝕆_{𝒌,𝓡} = ⋂_{𝛂 ∈ 𝓡} 𝕆_{𝒌,𝛂}, 𝕆_𝒌 = ⋃_𝓡 𝕆_{𝒌,𝓡} and 𝕆_as = ⋃_𝒌 𝕆_𝒌.

Taking 𝕂 = ℂ, the notion of accelero-summation extends to formal expressions of the form (2) and more general elements of 𝕊 as follows. Given g̃ ∈ 𝕆_{𝒌,𝛂}, σ ∈ ℚ, 𝔢 = e^{P(z^{-1/p})} ∈ 𝔈 and γ = (𝒌, 𝛂, z) ∈ dom_as g̃, we simply define (g̃ z^σ 𝔢)(γ) = g̃(γ) z^σ e^{P(z^{-1/p})}. It can be checked that this definition is coherent when replacing g̃ z^σ by (z^k g̃) z^{σ−k} for some k ∈ ℕ. By linearity, we thus obtain a natural differential subalgebra 𝕊_{𝒌,𝛂} ⊆ 𝕊 of accelero-summable transseries with critical times 𝒌 and in the multi-direction 𝛂. We also have natural analogues 𝕊_𝒌 and 𝕊_as of 𝕆_𝒌 and 𝕆_as.

The main result we need from the theory of accelero-summation is the following theorem [É87, Bra91].

Theorem 9. Let f̃ ∈ 𝕆 be a formal solution to Lf̃ = 0. Then f̃ ∈ 𝕆_as.

Corollary 10. Let 𝒉^0 ∈ 𝕊^n be the canonical basis of formal solutions to Lf = 0 at the origin. We have 𝒉^0 ∈ 𝕊_as^n.

Proof. Holonomy is preserved under multiplication with elements of z𝔈.

Remark 11. We have aimed to keep our survey of the accelero-summation process as brief as possible. It is more elegant to develop this theory using resurgent functions and resurgence monomials [É85, CNP93].

2.5. The Stokes phenomenon

We say that f ∈ 𝕊_{𝒌,𝓡} is stable under Stokes morphisms if for all 𝛂, 𝛃 ∈ 𝓡, there exists a g ∈ 𝕊_{𝒌,𝓡} with sum_{𝒌,𝛂} f = sum_{𝒌,𝛃} g, and if the same property is recursively satisfied by g. We denote by 𝕊̆_{𝒌,𝓡} the differential subring of 𝕊_{𝒌,𝓡} of elements which are stable under Stokes morphisms. The mappings Σ_{𝒌,𝛂,𝛃} : f ↦ sum_{𝒌,𝛃}^{-1} sum_{𝒌,𝛂} f will be called Stokes morphisms, and we denote by Sto_{0,𝒌,𝓡} the group of all such maps.

Proposition 12. Assume that f ∈ 𝕊̆_{𝒌,𝓡} is fixed under Sto_{0,𝒌,𝓡}. Then f is convergent.

Proof. Assume that one of the f̂_i admits a singularity at ω = ρ e^{iθ} ≠ 0, and choose i maximal and then ρ minimal. Modulo the removal of unnecessary critical times, we may assume without loss of generality that i = p. Let 𝛂 with α_p = θ be a multi-direction satisfying (5), such that α_i ∈ ℛ_i for all i < p. Then

𝛂_< = (α_1, …, α_{p-1}, α_p − ε) ∈ 𝓡,  𝛂_> = (α_1, …, α_{p-1}, α_p + ε) ∈ 𝓡

for all sufficiently small ε > 0. Now g_p = ℒ_{z_p}^{θ+ε} f̂_p − ℒ_{z_p}^{θ−ε} f̂_p is obtained by integration around ω along the axis e^{iθ} ℝ^>. By classical properties of the Laplace integral [CNP93, Pré I.2], the function g_p cannot vanish, since f̂_p admits a singularity at ω (if the Laplace integrals corresponding to both directions θ ± ε coincide, then the Laplace transform can be analytically continued to a larger sector, which is only possible if f̂_p is analytic in a sector which contains both directions θ ± ε). We conclude that g(z) = g_p(z_p) = (sum_{𝒌,𝛂_>} − sum_{𝒌,𝛂_<})(f) ≠ 0, so f is not fixed under Sto_{0,𝒌,𝓡}.
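The jump g_p in the proof can be observed numerically on the Euler example (our own illustration, not from the text): the Borel transform 1/(1+ζ) has its singularity ω = −1 in the direction θ = π, and the two lateral Laplace sums around this direction differ by the residue term 2πi e^{1/z}, exponentially small but non-zero.

```python
import cmath

def lateral_laplace(z, theta, R=60.0, n=6000):
    """Laplace integral of the Borel transform 1/(1+zeta) of Euler's series
    along the ray zeta = r*exp(i*theta), r in (0, R), by composite Simpson."""
    h = R / n
    e = cmath.exp(1j * theta)
    def g(r):
        zeta = r * e
        return cmath.exp(-zeta / z) / (1 + zeta) * e
    s = g(0) + g(R)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3

# Crossing the singular direction theta = pi picks up the residue of
# e^(-zeta/z)/(1+zeta) at zeta = -1, namely 2*pi*i*e^(1/z).
z, eps = -0.2, 0.3
jump = lateral_laplace(z, cmath.pi - eps) - lateral_laplace(z, cmath.pi + eps)
print(abs(jump - 2j * cmath.pi * cmath.exp(1 / z)))   # small residual
```

The two lateral sums thus resum the same divergent series to two different analytic functions; their difference 2πi e^{1/z} is flat at 0, which is exactly the Stokes phenomenon.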

Remark 13. Let 𝓓 be a set of multi-directions 𝛂 satisfying (5), with α_i ∈ 𝒟_i for exactly one i, and such that for all j ≠ i, we have either α_j = k_i α_i / k_j, or k_i α_i / k_j ∈ 𝒟_j and α_j = k_i (α_i ± ε)/k_j for some small ε > 0. For every 𝛂 ∈ 𝓓, we have

𝛂_< = (α_1, …, α_{i-1}, α_i − ε, α_{i+1}, …, α_p) ∈ 𝓡,  𝛂_> = (α_1, …, α_{i-1}, α_i + ε, α_{i+1}, …, α_p) ∈ 𝓡.

By looking more carefully at the proof of proposition 12, we observe that it suffices to assume that f is fixed under all Stokes morphisms of the form Σ𝒌,𝛂<,𝛂>, instead of all elements in Sto0,𝒌,𝓡.

We say that 𝛂, 𝛃 ∈ 𝓓 are equivalent if α_i − β_i ∈ 2π q_i ℤ for all i, where q_i is the denominator of k_i. We notice that 𝓓 is finite modulo this equivalence relation. We denote by 𝓓_gen a subset of 𝓓 with one element in each equivalence class.

Let us now come back to our differential equation Lf = 0. Given γ = (𝒌, 𝛂, z) ∈ dom_as 𝒉^0 = dom_as h_1^0 ∩ ⋯ ∩ dom_as h_n^0, the map sum_{𝒌,𝛂} induces an isomorphism between Vect(𝒉^0) and Vect(𝒉_z). We denote by Δ_γ ∈ GL_n(ℂ) the unique matrix with 𝒉_z = Δ_γ sum_{𝒌,𝛂} 𝒉^0. Given a second γ′ = (𝒌, 𝛂′, z) ∈ dom_as 𝒉^0, the vector sum_{𝒌,𝛂′}^{-1} sum_{𝒌,𝛂} 𝒉^0 is again in 𝕊_{𝒌,𝓡}^n, whence 𝒉^0 ∈ 𝕊̆_{𝒌,𝓡}^n by repeating the argument. In particular, the Stokes morphism Σ_{𝒌,𝛂,𝛂′} induces the Stokes matrix Δ_{(0,𝒌,𝛂,𝛂′)} = Δ_{γ′}^{-1} Δ_γ.

We are now in the position that we can construct a finite set ℳ of generators for the Galois group 𝒢_{L,𝒉_{z_0}} at a regular point z_0 ∈ (𝕂 ∪ {∞}) ∖ 𝒮.

Algorithm Compute_generators(L,z0)

Input: an operator L ∈ 𝕂(z)[∂] and a regular point z_0 ∈ (𝕂 ∪ {∞}) ∖ 𝒮

Output: a finite set ℳ of generators for 𝒢_{L,𝒉_{z_0}}

ℳ ≔ ∅

for each z_i ∈ 𝒮 do

  compute a broken-line path γ_i from z_0 to a point u_i near z_i

  compute the transition matrix Δ_{γ_i} along γ_i

  let ℳ_i consist of the formal monodromy matrix at z_i, the generators of the local exponential group given by proposition 6, and the Stokes matrices Δ_{(z_i,𝒌,𝛂_<,𝛂_>)} for 𝛂 ∈ 𝓓_gen

  ℳ ≔ ℳ ∪ Δ_{γ_i}^{-1} ℳ_i Δ_{γ_i}

return ℳ

Theorem 14. With ℳ constructed as above, the differential Galois group 𝒢_{L,𝒉_{z_0}} is generated by ℳ as a closed algebraic subgroup of Mat_n(ℂ).

Proof. Assume that f ∈ 𝕂(z)⟨h_1^{z_0}, …, h_n^{z_0}⟩ is fixed by each element of ℳ. We have to prove that f ∈ 𝕂(z). Given a singularity z_i, let g be the "continuation" of f along γ_i^{-1} (which involves analytic continuation until u_i followed by "decelero-unsummation"). By proposition 7, we have g ∈ 𝕂((z)). From proposition 12 and remark 13, we next deduce that g is convergent. Indeed, since g ∈ 𝕂((z)), its realization ĝ_i in the convolution model with critical time z_i = z^{1/k_i} = z^{q_i/p_i} is a function in ζ_i^{1/q_i}. Consequently, Σ_{𝒌,𝛂_<,𝛂_>} = Σ_{𝒌,𝛃_<,𝛃_>} whenever 𝛂 and 𝛃 are equivalent. At this point we have shown that f is meromorphic at z_i. But a function which is meromorphic at all points of the Riemann sphere ℂ ∪ {∞} is actually a rational function. It follows that f ∈ 𝕂(z).

Remark 15. Slightly less effective versions of theorem 14 are due to Ramis [Ram85, MR91]. Both papers crucially use previous work by Écalle and we followed a similar approach as in the second paper. In the Fuchsian case, i.e. in absence of divergence, the result is due to Schlesinger [Sch95, Sch97].

Remark 16. We have tried to keep our exposition as short as possible by considering only "directional Stokes morphisms". In fact, Écalle's theory of resurgent functions gives a more fine-grained control over what happens in the convolution model by considering the pointed alien derivatives Δ̇_ω for ω ∈ ℂ̇. Modulo the identification of functions in the formal model, the convolution models and the geometric model via accelero-summation, the pointed alien derivatives commute with the usual derivation ∂. Consequently, if f is a solution to Lf = 0, then we also have L Δ̇_ω f = 0. In particular, given the canonical basis of solutions 𝒉^0 to Lf = 0, there exists a unique matrix B_ω with

Δ̇_ω 𝒉^0 = B_ω 𝒉^0.

This equation is called the bridge equation. Since f̂_i admits only a finite number of singularities and the alien derivations "translate singularities", we have Δ̇_ω^l 𝒉^0 = 0 for some l ∈ ℕ, so the matrices B_ω are nilpotent. More generally, if ω_1, …, ω_r are ℚ-linearly independent, then all elements in the algebra generated by B_{ω_1}, …, B_{ω_r} are nilpotent.

It is easily shown that the Stokes morphisms correspond to the exponentials e^{Δ̇_θ} of the directional alien derivations Δ̇_θ = Σ_{ω ∈ e^{iθ} ℝ^>} Δ̇_ω. This yields a way to reinterpret the Stokes matrices in terms of the B_ω with ω ∈ e^{iθ} ℝ^>. In particular, the preceding discussion implies that the Stokes matrices are unipotent. The extra flexibility provided by pointed over directional alien derivatives admits many applications, such as the preservation of realness [Men96]. For further details, see [É85, É87, É92, É93].

2.6. Effective complex numbers

A complex number z ∈ ℂ is said to be effective if there exists an approximation algorithm for z, which takes ε ∈ 2^ℤ on input and returns an ε-approximation z̃ ∈ (ℤ + ℤi) 2^ℤ of z, for which |z̃ − z| < ε. The time complexity of this approximation algorithm is the time T(d) it takes to compute a 2^{-d}-approximation of z. It is not hard to show that the set ℂ^eff of effective complex numbers forms a field. However, given z ∈ ℂ^eff, the question whether z = 0 is undecidable. The following theorems were proved in [CC90, vdH99, vdH01b].

Theorem 17. Let L ∈ ℚ^alg(z)[∂], z_0 ∈ ℚ^alg ∖ 𝒮, 𝒗 ∈ (ℚ^alg)^n and f = 𝒗 · 𝒉_{z_0}. Given a broken-line path γ = z_0 → ⋯ → z_k on ℂ^eff ∖ 𝒮, we have:

  a. The value f(γ) of the analytic continuation of f at the end-point of γ is effective.
  b. There exists an approximation algorithm of time complexity O(d log^3 d log^2 log d) for f(γ), when not counting the approximation time of the input data L, γ and 𝒗.
  c. There exists an algorithm which computes an approximation algorithm for f(γ) as in (b), as a function of L, γ and 𝒗.

Theorem 18. Let L ∈ ℚ^alg(z)[∂] be regular singular at 0. Let z_0 ∈ ℚ^alg ∖ 𝒮, 𝒗 ∈ (ℚ^alg)^n and f = 𝒗 · 𝒉_{z_0}. Then f(γ) is well-defined for all sufficiently small γ on the effective Riemann surface ℂ̇^eff of log above ℂ^eff, and:

  a. f(γ) is effective.
  b. There exists an approximation algorithm of time complexity O(d log^3 d log^2 log d) for f(γ), when not counting the approximation time of the input data L, γ and 𝒗.
  c. There exists an algorithm which computes an approximation algorithm for f(γ) as in (b), as a function of L, γ and 𝒗.

In general, the approximation of f(γ) involves the existence of certain bounds. In each of the above theorems, the assertion (c) essentially states that there exists an algorithm for computing these bounds as a function of the input data. This property does not merely follow from (a) and (b) alone.

The following theorem has been proved in [vdH05b].

Theorem 19. Let L ∈ ℚ^alg(z)[∂] be singular at 0. Let 𝒌 be as in corollary 10 and Γ = {(𝒌, 𝛂, z) ∈ dom_as 𝒉^0 : α_1, …, α_p, z effective}. Given f = 𝒗 · 𝒉^0 with 𝒗 ∈ (ℂ^eff)^n and γ ∈ Γ, we have:

  a. f(γ) is effective.
  b. There exists an approximation algorithm of time complexity O(d log^4 d log log d) for f(γ), when not counting the approximation time of the input data L, γ and 𝒗.
  c. There exists an algorithm which computes an approximation algorithm for f(γ) as in (b), as a function of L, γ and 𝒗.

If we replace ℚ^alg by an arbitrary effective algebraically closed subfield 𝕂 of ℂ^eff, then the assertions (a) and (c) in the three above theorems remain valid (see [vdH05a, vdH03] in the cases of theorems 17 and 18), but the complexity in (b) usually drops back to O(d^2 log^{O(1)} d). Notice also that we may replace f(γ) by the transition matrix along γ in each of the theorems. The following theorem summarizes the results from section 2.

Theorem 20. Let 𝕂 ⊆ ℂ^eff be an effective algebraically closed field of constants. Then there exists an algorithm which takes L ∈ 𝕂(z)[∂] and z_0 ∈ (𝕂 ∪ {∞}) ∖ 𝒮 on input, and which computes a finite set ℳ ⊆ Mat_n(ℂ^eff), such that:

  a. The group 𝒢_{L,𝒉_{z_0}} is generated by ℳ as a closed algebraic subgroup of Mat_n(ℂ).
  b. If 𝕂 = ℚ^alg, then the entries of the matrices in ℳ have time complexity O(d log^4 d log log d).

Proof. It is classical that the set ℂ^eff of effective complex numbers forms a field. Similarly, the set ℂ^fast of effective complex numbers with an approximation algorithm of time complexity O(d log^4 d log log d) forms a field, since the operations +, −, × and / can all be performed in time O(d log d log log d). In particular, the classes of matrices with entries in ℂ^eff resp. ℂ^fast are stable under the same operations. Now in the algorithm Compute_generators, we may take broken-line paths with vertices above 𝕂 for the γ_i. Hence (a) and (b) follow from theorem 19(a) resp. theorem 19(b) and the above observations.

Given ε ∈ 2^ℤ, we may endow ℂ^eff with the approximate zero-test for which z = 0 if and only if |z| < ε. We will denote this field by ℂ_ε. Clearly, this zero-test is not compatible with the field structure of ℂ^eff. Nevertheless, any finite computation which can be carried out in ℂ^eff with an oracle for zero-testing can be carried out in exactly the same way in ℂ_ε for a sufficiently small ε. Given z ∈ ℂ^eff, we will denote by z_ε ∈ ℂ_ε the "cast" of z to ℂ_ε, and similarly for matrices with coefficients in ℂ^eff.

Remark 21. In practice [vdH04], effective complex numbers z usually come with a natural bound M ∈ 2^ℤ for |z|. Then, given ε ∈ 2^ℤ with ε < 1, it is even better to use the approximate zero-test z = 0 if and only if |z| < ε M. Notice that the bound M usually depends on the internal representation of z and not merely on z as a number in ℂ^eff.
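A minimal sketch in code of this zero-test (our illustration; the class and its interface are hypothetical): an effective number is given by its approximation algorithm, and the test compares an approximation against the threshold ε, optionally scaled by the bound M of remark 21.

```python
import math

class EffectiveNumber:
    """Sketch of an effective complex number: it is represented by its
    approximation algorithm eps -> eps-approximation (interface hypothetical)."""

    def __init__(self, approx, bound=None):
        self.approx = approx    # approx(eps) returns an eps-approximation of z
        self.bound = bound      # optional a-priori bound M >= |z| (remark 21)

    def is_zero(self, eps):
        """Approximate zero-test: declare z = 0 iff |z| < eps, with eps
        scaled by the bound M when one is available (remark 21)."""
        tol = eps * self.bound if self.bound is not None else eps
        return abs(self.approx(tol / 2)) < tol

# pi - pi is the zero number; every approximation of it may be taken to be 0.
z1 = EffectiveNumber(lambda eps: math.pi - math.pi)
# A genuinely non-zero number is rejected by the test at this precision.
z2 = EffectiveNumber(lambda eps: 1.0, bound=2.0)
print(z1.is_zero(2.0**-30), z2.is_zero(2.0**-30))
```

As the text stresses, this test is not compatible with the field operations: sums of many "approximately zero" numbers can fail the test, which is why a sufficiently small ε must be chosen for each finite computation.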

3. Factoring linear differential operators

Let 𝕂 be an effective algebraically closed subfield of ℂ^eff. Consider a monic linear differential operator L = ∂^n + L_{n-1} ∂^{n-1} + ⋯ + L_0 ∈ 𝕂(z)[∂]. In this section, we present an algorithm for finding a non-trivial factorization L = K_1 K_2 with K_1, K_2 ∈ 𝕂(z)[∂], whenever such a factorization exists.

3.1.Factoring L and invariant subspaces under 𝒢L,𝒉

Let 𝒉 = (h1, …, hn) ∈ 𝒦^n be a basis of solutions for the equation Lf = 0, where 𝒦 is an abstract differential field. We denote the Wronskian of 𝒉 by

W𝒉 = Wh1,…,hn = det( hj^{(i)} )_{0 ≤ i ≤ n−1, 1 ≤ j ≤ n}.

It is classical (and easy to check) that

Lf = Wf,h1,…,hn / Wh1,…,hn. (7)

When expanding the determinant Wf,h1,…,hn along the column of f, in terms of the determinants

Wi = W𝒉,i = det( hj^{(k)} )_{k ∈ {0,…,n} ∖ {n−i}, 1 ≤ j ≤ n},

it follows that

L = ∂^n − (W1/W0) ∂^{n−1} + ⋯ + (−1)^n Wn/W0.

Denoting by φ^† = φ′/φ the logarithmic derivative of φ, it can also be checked by induction that

L̃ = (∂ − (Wh1,…,hn/Wh1,…,hn−1)^†) ⋯ (∂ − (Wh1,h2/Wh1)^†)(∂ − Wh1^†)

admits h1, …, hn as solutions, whence L̃ = L, using Euclidean division in the skew polynomial ring 𝕂(z)[∂].
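As a toy check of these formulas (a sketch, not from the original text): for the assumed operator L = (∂ − 1)(∂ − 2) = ∂² − 3∂ + 2 with solution basis h1 = e^z, h2 = e^{2z}, the quotients Wi/W0 recover the coefficients of L. Since the k-th derivative of e^{az} is a^k e^{az}, the determinants can be evaluated numerically at any point:

```python
import math

def W(orders, z):
    """2x2 Wronskian-type determinant for h1 = exp(z), h2 = exp(2z),
    whose rows are the derivatives of the given orders."""
    rows = [[1 ** k * math.exp(z), 2 ** k * math.exp(2 * z)] for k in orders]
    return rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]

z = 0.3                    # any evaluation point works: the quotients are constant
W0 = W([0, 1], z)          # the Wronskian W_h (omit the order-2 row)
c1 = -W([0, 2], z) / W0    # coefficient of d: -W1/W0
c0 = W([1, 2], z) / W0     # constant coefficient: (-1)^2 W2/W0
```

One finds c1 ≈ −3 and c0 ≈ 2, i.e. L = ∂² − 3∂ + 2, in accordance with the expansion above.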

Proposition 22.

  1. If L admits a factorization L = K1K2, then 𝒢L,𝒉 leaves ker K2 invariant.
  2. If V ⊆ ker L is a subvector space which is invariant under 𝒢L,𝒉, then L admits a factorization L = K1K2 with V = ker K2.

Proof. Assume that L admits a factorization L = K1K2. Then, given f ∈ ker K2 and M ∈ 𝒢L,𝒉, we have K2Mf = MK2f = M0 = 0, whence Mf ∈ ker K2. Conversely, assume that V ⊆ ker L is a subvector space which is invariant under 𝒢L,𝒉, and let 𝒈 be a basis of V. Then we observe that MW𝒈,i = det(M|V) W𝒈,i for all i, where M|V denotes the restriction of M to V. Consequently,

M (W𝒈,i/W𝒈,0) = W𝒈,i/W𝒈,0

for all i, so that W𝒈,i/W𝒈,0 ∈ 𝕂(z), by corollary 3. Hence

K2 = ∂^r − (W𝒈,1/W𝒈,0) ∂^{r−1} + ⋯ + (−1)^r W𝒈,r/W𝒈,0 (8)

is a differential operator with coefficients in 𝕂(z) which vanishes on V. But this is only possible if K2 divides L on the right.

3.2.A lemma from linear algebra

Lemma 23. Let 𝒜 be a non-unitary algebra of nilpotent matrices in Matn(𝕂). Then there exists a basis of 𝕂^n in which M is lower triangular for all M ∈ 𝒜.

Proof. Let M ∈ 𝒜 be a matrix such that V = im M is a non-zero vector space of minimal dimension. Given 𝒗 ∈ V and N ∈ 𝒜, we claim that N𝒗 ∈ ker M. Assume the contrary, so that 0 ≠ MN𝒗 ∈ im MN. By the minimality hypothesis, we must have im MN = V. In particular, 𝒗 ∈ im MN and 0 ≠ MN𝒗 ∈ im MNMN. Again by the minimality hypothesis, it follows that im MNMN = V. In other words, the restriction of MN to V is an isomorphism of V. Hence MN admits a non-zero eigenvalue, which contradicts the fact that MN is nilpotent.

Let us now prove the lemma by induction over n. If n ≤ 1 or 𝒜 = 0, then there is nothing to do, so assume that n > 1 and 𝒜 ≠ 0. We claim that 𝕂^n admits a non-trivial subvector space W which is invariant under 𝒜. Indeed, we may take W = V if 𝒜V = 0 and W = 𝒜V if 𝒜V ≠ 0. Now consider a basis (𝒃m+1, …, 𝒃n) of W and complete it to a basis (𝒃1, …, 𝒃n) of 𝕂^n. Then each matrix in 𝒜 is block lower triangular with respect to this basis. Let 𝒜1 and 𝒜2 be the algebras of lower dimensional matrices which occur as upper left resp. lower right blocks of the matrices in 𝒜. We conclude by applying the induction hypothesis to 𝒜1 and 𝒜2.

Let ℱ be a finite set of non-zero nilpotent matrices. If all matrices in the 𝕂-algebra 𝒜 generated by ℱ are nilpotent, then it is easy to compute a basis for which all matrices in ℱ are lower triangular. Indeed, setting Ki = ⋂_{M ∈ 𝒜^i} ker M for all i, where 𝒜^i stands for the set of i-fold products of elements of 𝒜, we first compute a basis of K1. We successively complete this basis into a basis of K2, K3 and so on, until Kp = 𝕂^n.

If not all matrices in 𝒜 are nilpotent, then the proof of lemma 23 indicates a method for the computation of a matrix in 𝒜 which is not nilpotent. Indeed, we start by picking an M ∈ ℱ and setting V ← im M^{p−1}, where p is smallest with M^p = 0. We next set 𝒩 ← ℱ ∖ {M} and iterate the following loop. Take a matrix N ∈ 𝒩 and distinguish the following three cases:

MNV = 0
Set 𝒩 ← 𝒩 ∖ {N} and continue.
0 ⊊ MNV ⊊ V
Set M ← MNM, V ← im M and continue.
MNV = V
Return the non-nilpotent matrix MN.

At the end of our loop, we have either found a non-nilpotent matrix, or we have NV ⊆ ker M for all N ∈ ℱ. In the second case, we obtain a non-trivial invariant subspace of 𝕂^n as in the proof of lemma 23, and we recursively apply the algorithm to this subspace and a complement. In fact, the returned matrix is not even monopotent (i.e. not of the form λ + N, where N is a nilpotent matrix), since it admits both zero and a non-zero number as eigenvalues.
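The search loop above can be sketched numerically as follows (a sketch assuming NumPy; floating-point rank tests stand in for the exact dimension comparisons, so this is an illustration rather than the exact algorithm):

```python
import numpy as np

def rank(A, tol=1e-9):
    return np.linalg.matrix_rank(A, tol=tol)

def find_non_nilpotent(F, tol=1e-9):
    """Numeric sketch of the loop above: given non-zero nilpotent generators F,
    return a non-nilpotent product in the algebra they generate, or None when
    N V <= ker M for all N (the triangularizable case)."""
    M = np.array(F[0], dtype=float)
    P = M.copy()                        # P becomes M^(p-1), so that im P = V
    while rank(P @ M, tol) > 0:
        P = P @ M
    work = [np.array(N, dtype=float) for N in F]
    while work:
        N = work.pop()
        MNV = M @ N @ P
        if rank(MNV, tol) == 0:               # M N V = 0: discard N
            continue
        if rank(MNV, tol) < rank(P, tol):     # 0 < M N V < V: replace M by M N M
            M = M @ N @ M
            P = M.copy()
            while rank(P @ M, tol) > 0:
                P = P @ M
            work = [np.array(X, dtype=float) for X in F]
        else:                                 # M N V = V: M N is not nilpotent
            return M @ N
    return None

# The nilpotent generators E12, E21 multiply to a non-nilpotent matrix:
R = find_non_nilpotent([[[0, 1], [0, 0]], [[0, 0], [1, 0]]])
```

For the two generators above, the loop returns E12·E21, which has eigenvalues 0 and 1 and is therefore not even monopotent.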

3.3.Computation of non-trivial invariant subspaces

Proposition 22 in combination with theorem 20 implies that the factorization of linear differential operators in 𝕂(z)[∂] reduces to the computation of non-trivial invariant subvector spaces under the action of 𝒢L,𝒉, whenever they exist.

In this section, we will first solve a slightly simpler problem: assuming that 𝕂 is an effective algebraically closed field and given a finite set ℳ of matrices in Matn(𝕂), we will show how to compute a non-trivial subspace V of 𝕂^n which is invariant under the action of ℳ, whenever such a V exists.

Good candidate vectors 𝒗
Given a vector 𝒗 ∈ 𝕂^n, it is easy to compute the smallest subspace Inv(𝒗) of 𝕂^n which is invariant under the action of ℳ and which contains 𝒗. Indeed, starting with the basis ℬ = {𝒗}, we keep enlarging ℬ with elements in ℳℬ ∖ Vect(ℬ) until saturation. Since ℬ will never contain more than n elements, this algorithm terminates. A candidate vector 𝒗 ∈ 𝕂^n for generating a non-trivial invariant subspace of 𝕂^n is said to be good if 0 < dim Inv(𝒗) < n.

The 𝕂-algebra generated by ℳ
We notice that V ⊆ 𝕂^n is an invariant subspace for ℳ if and only if V is an invariant subspace for the 𝕂-algebra Alg(ℳ) generated by ℳ. Again it is easy to compute a basis ℬ for Alg(ℳ): we start with a basis of Vect(ℳ) and keep adjoining elements in ℬ² ∖ Vect(ℬ) to ℬ until saturation. We will avoid the explicit computation of a basis of Alg(ℳ), which may contain as many as n² elements, and rather focus on the efficient computation of good candidate vectors.

ℳ-splittings
A decomposition 𝕂^n = E1 ⊕ ⋯ ⊕ Ek, where E1, …, Ek are non-empty vector spaces, will be called an ℳ-splitting of 𝕂^n if the projections Pi = PEi of 𝕂^n on Ei are all in Alg(ℳ). Then, given M ∈ Matn(𝕂), we have M ∈ Alg(ℳ) if and only if PiMPj ∈ Alg(ℳ) for all i, j. If we choose a basis for 𝕂^n which is a union of bases for the Ei, we notice that the PiMPj are dim Ei × dim Ej block matrices. In the above algorithm for computing the 𝕂-algebra generated by ℳ, it now suffices to compute with block matrices of this form. In particular, the computed basis of Alg(ℳ) will consist of such matrices. The trivial decomposition 𝕂^n = 𝕂^n is clearly an ℳ-splitting. Given N ∈ Alg(ℳ), we notice that any (ℳ ∪ {N})-splitting is also an ℳ-splitting.

Refining ℳ-splittings
An ℳ-splitting 𝕂^n = F1 ⊕ ⋯ ⊕ Fl is said to be finer than the ℳ-splitting 𝕂^n = E1 ⊕ ⋯ ⊕ Ek if each Ei is a direct sum of a subset of the Fj. Given an ℳ-splitting 𝕂^n = E1 ⊕ ⋯ ⊕ Ek and an arbitrary element M ∈ Alg(ℳ), we may obtain a finer ℳ-splitting w.r.t. M as follows. Let i ∈ {1, …, k} and consider Mi = PiMPi. If λ1, …, λp are the eigenvalues of Mi on Ei, then Ei = ker(Mi − λ1)^{ni} ⊕ ⋯ ⊕ ker(Mi − λp)^{ni} is a (PiℳPi)-splitting of Ei, where ni = dim Ei. Collecting these (PiℳPi)-splittings, we obtain a finer ℳ-splitting F1 ⊕ ⋯ ⊕ Fl of 𝕂^n. This ℳ-splitting, which is said to be refined w.r.t. M, has the property that PFiMPFi is monopotent on Fi for each i, with a unique eigenvalue λM,Fi.

We now have the following algorithm for computing non-trivial ℳ-invariant subspaces of 𝕂^n when they exist.

Algorithm Invariant_subspace(ℳ)

Input: a set ℳ of non-zero matrices in Matn(𝕂)

Output: a non-trivial ℳ-invariant subspace of 𝕂^n or fail

Step 1. [Initial ℳ-splitting]

Compute a “random non-zero element” N of Alg(ℳ)

Compute an ℳ-splitting 𝕂^n = E1 ⊕ ⋯ ⊕ Ek w.r.t. N and each M ∈ ℳ

𝒟 ← ∅

Step 2. [One dimensional components]

For every Ei with dim Ei = 1 and Ei ∉ 𝒟, do the following:

Pick a 𝒗 ∈ Ei and compute Inv(𝒗)

If Inv(𝒗) ⊊ 𝕂^n then return Inv(𝒗)

Otherwise, set 𝒟 ← 𝒟 ∪ {Ei}

If dim Ei = 1 for all i then return fail

Step 3. [Higher dimensional components]

Let i be such that dim Ei > 1

Let ℳi ← {Pi(M − λM,Ei)Pi : M ∈ ℳ}

Let Ki ← Ei ∩ ⋂_{M ∈ ℳi} ker M

If Ki = 0 then go to step 4 and otherwise to step 5

Step 4. [Non-triangular case]

Let N ∈ Alg(ℳi) be non-monopotent on Ei (cf. previous section)

Refine the ℳ-splitting w.r.t. N and return to step 2

Step 5. [Potentially triangular case]

Choose 𝒗 ∈ Ki and compute Inv(𝒗)

If Inv(𝒗) ⊊ 𝕂^n then return Inv(𝒗)

Otherwise, let N be in Alg(ℳ) with PiN𝒗 ∉ Ki

Refine the ℳ-splitting w.r.t. N

If this yields a finer ℳ-splitting then return to step 2

Otherwise, set ℳi ← ℳi ∪ {PiNPi} and repeat step 5

The algorithm needs a few additional explanations. In step 1, we may take N to be an arbitrary element of ℳ. However, it is better to take a “small random expression in the elements of ℳ” for N. With high probability, this yields an ℳ-splitting which will not need to be refined in the sequel. Indeed, the subset of matrices in Alg(ℳ) which yield non-maximal ℳ-splittings is a closed algebraic subset of measure zero, since it is determined by coinciding eigenvalues. In particular, given an ℳ-splitting 𝕂^n = E1 ⊕ ⋯ ⊕ Ek w.r.t. N, it will usually suffice to check that each M ∈ ℳ is monopotent on each Ei in order to obtain an ℳ-splitting w.r.t. the other elements of ℳ.

Throughout the algorithm, the ℳ-splitting gets finer and finer, so the ℳ-splitting ultimately remains constant. From this point on, the space Ki can only strictly decrease in step 5, so Ki also ultimately remains constant. But then we either find a non-trivial invariant subspace in step 5, or all components of the ℳ-splitting become one-dimensional. In the latter case, we either obtain a non-trivial invariant subspace in step 2, or a proof that Inv(𝒗) = 𝕂^n for every 𝒗 ∈ E1 ∪ ⋯ ∪ En (and thus for every 𝒗 ∈ 𝕂^n ∖ {0}).

Remark 24. Assume that 𝕂 is no longer an effective algebraically closed field, but rather a field ℂε with an approximate zero-test. In that case, we recall that a number which is approximately zero is not necessarily zero. On the other hand, a number which is not approximately zero is surely non-zero. Consequently, in our algorithm for the computation of Inv(𝒗), the dimension of Inv(𝒗) can be too small, but it is never too large. In particular, if the algorithm Invariant_subspace fails, then the approximate proof that Inv(𝒗) = 𝕂^n for every 𝒗 ∈ E1 ∪ ⋯ ∪ En yields a genuine proof that there are no non-trivial invariant subspaces.

3.4.Factoring linear differential operators

Putting together the results from the previous sections, we now have the following algorithm for finding a right factor of L.

Algorithm Right_factor(L)

Input: L = ∂^n + Ln−1∂^{n−1} + ⋯ + L0 ∈ 𝕂(z)[∂]

Output: a non-trivial right-factor of L or fail

Step 1. [Compute generators]

Choose z0 ∈ 𝕂 ∖ 𝒮 and let 𝒉 = 𝒉z0

Compute a finite set ℳ ⊆ GLn(ℂeff) of generators for 𝒢L,𝒉 (cf. theorem 20)

Step 2. [Initial precision]

T ← max(deg L0, …, deg Ln−1) + 1

δ ← 2^{−32}

While δ′ ← min {|Mi,j/Mi′,j′| : M ∈ ℳ, |Mi′,j′| ≥ δ/(2T), Mi,j ≠ 0} < δ do δ ← δ′

ε ← δ/(2T)

Step 3. [Produce invariant subspace]

Let V ← Invariant_subspace(ℳε)

If V = fail then return fail

Let B ∈ Matn,r(ℂε) be a column basis of V

Let 𝒈 ← Bᵗ𝒉 ∈ ((ℂeff)ε[[z]]eff)^r

Step 4. [Produce and check guess]

Let K ← ∂^r − (W𝒈,1/W𝒈,0) ∂^{r−1} + ⋯ + (−1)^r W𝒈,r/W𝒈,0

Divide L by K, producing Q, R ∈ ℂε((z))[∂] with L = QK + R

If R ≢ 0 mod z^T then go to step 5

Reconstruct Q̃, K̃ ∈ 𝕂(z)[∂] from Q and K with precision (ε, T)

If we obtain no good approximations or L ≠ Q̃K̃ then go to step 5

Return K̃

Step 5. [Increase precision]

T ← 2T

ε ← δ/(2T)

Go to step 3

The main idea behind the algorithm is to use proposition 22 in combination with Invariant_subspace so as to produce good candidate right factors of L in ℂeff((z))[∂]. Using reconstruction of coefficients in 𝕂(z) from Laurent series in ℂeff((z)) at increasing precisions, we next produce good candidate right factors in 𝕂(z)[∂]. We keep increasing the precision until we find a right factor or a proof that L is irreducible. Let us detail the different steps a bit more:

Step 2
We will work with power series approximations with T terms and approximate zero-tests in ℂε. The degree of a rational function P/Q is defined by deg(P/Q) = max(deg P, deg Q). The initial precisions T and log 1/ε have been chosen as small as possible: we want to take advantage of a possibly quick answer when computing at small precision (see also the explanations for step 5 below).
Step 3
If Invariant_subspace fails, then there exists no factorization of L, by remark 24. Effective power series and Laurent series are defined in a similar way as effective real numbers (in particular, we do not assume the existence of an effective zero-test). Efficient algorithms for such computations are described in [vdH02].
Step 4
The reconstruction of Q̃ and K̃ from Q and K contains two ingredients: we use Padé approximation to find rational function approximations of degree ≤ T, and the LLL-algorithm to approximate numbers in ℂε by numbers in 𝕂.
Step 5
Doubling the precision at successive steps heuristically causes the computation time to increase geometrically at each step. In particular, unsuccessful computations at lower precisions are cheap compared to the last, successful computation at the required precision. Instead of multiplying the precision by two, we also notice that it would be even better to increase it by a factor which doubles the estimated computation time at each step. Of course, this would require a more precise complexity analysis of the algorithm.

The problem of reconstructing elements of 𝕂 from elements of ℂε is an interesting topic in its own right. In theory, one may consider the polynomial algebra over ℚ generated by all coefficients occurring in L and the number z we wish to reconstruct. We may then apply the LLL-algorithm [LLL82] to the lattice spanned by the T monomials of smallest total degree (for instance) and search for minimal T-digit relations. If 𝕂 = ℚalg is the algebraic closure of ℚ, then we may simply use the lattice spanned by the first n powers of z.
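When 𝕂 = ℚ, the reconstruction step can be illustrated with continued fractions instead of LLL (a simplified stand-in, not the paper's method: `limit_denominator` recovers a small rational from a floating-point approximation, whereas for genuinely algebraic 𝕂 one would run LLL on the powers of z as described above):

```python
from fractions import Fraction

def reconstruct_rational(x: float, max_den: int) -> Fraction:
    """Recover a small rational from an approximation via continued fractions."""
    return Fraction(x).limit_denominator(max_den)

# hypothetical coefficient computed numerically to ~16 digits:
guess = reconstruct_rational(0.14285714285714285, 10 ** 6)
print(guess)  # 1/7
```

As with LLL, the reconstruction only succeeds once the working precision exceeds the size of the rational being recovered, which is why the algorithm doubles the precision until the a posteriori check L = Q̃K̃ goes through.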

At a sufficiently large precision T, the LLL-algorithm will ultimately succeed for all coefficients of a candidate factorization which need to be reconstructed. If there are no factorizations, then the algorithm will ultimately fail at step 3. This proves the termination of Right_factor.

Remark 25. In practice, and especially if 𝕂 ≠ ℚalg, it would be nice to use more of the structure of the original problem. For instance, a factorization of L actually yields relations between the coefficients which we may try to use. For high precision computations, it is also recommended to speed up the LLL-algorithm using a dichotomic algorithm similar to those for fast g.c.d. computations [Moe73, PW02].

Remark 26. Notice that we did not use bounds for the degrees of the coefficients of possible factors in our algorithm. If a bound TB is available, using techniques from [BB85, vH97, vdPS03], then one may take T ← min(2T, TB) instead of T ← 2T in step 5. Of course, bounds for the required precision ε are even harder to obtain. See [BB85] for some results in that direction.

4.Computing differential Galois groups

4.1.Introduction

Throughout this section, 𝔽 will stand for the field ℂε of effective complex numbers with the approximate zero-test at precision ε > 0. This field has the following properties:

EH1
We have an effective zero-test in 𝔽.
EH2
There exists an algorithm which takes 𝒄 ∈ (𝔽^≠)^n on input and which computes a finite set of generators for the ℤ-module of vectors 𝒌 ∈ ℤ^n with 𝒄^𝒌 = 1.
EH3
There exists an algorithm which takes 𝒄 ∈ 𝔽^n on input and which computes a finite set of generators for the ℤ-module of vectors 𝒌 ∈ ℤ^n with 𝒄 · 𝒌 = 0.
EH4
𝔽 is closed under exponentiation and logarithm.

Indeed, we obtain EH2 and EH3 using the LLL-algorithm. Some of the results in this section go through when only a subset of the conditions is satisfied. In that case, we notice that EH2 ⇒ EH1, EH3 ⇒ EH1 and EH4 ⇒ (EH2 ⇔ EH3).

Given a finite set ℳ of matrices in GLn(𝔽), we give a numerical algorithm for the computation of the smallest closed algebraic subgroup 𝒢 = ⟨ℳ⟩ of GLn(𝔽) which contains ℳ. We will represent 𝒢 by a finite set ℛ ⊆ GLn(𝔽) and a finite basis ℬ ⊆ Matn(𝔽) of a Lie algebra ℒ over 𝔽, such that

𝒢 = ℛ e^ℒ,

and each N ∈ ℛ corresponds to a unique connected component N e^ℒ of 𝒢. We will also prove that there exists a precision ε0 such that the algorithm yields the theoretically correct result for all ε < ε0.

4.2.The algebraic group generated by a diagonal matrix

Let Torn(𝔽) ⊆ GLn(𝔽) be the group of invertible diagonal matrices. Each matrix M ∈ Torn(𝔽) has the form M = Diag(𝛂), where 𝛂 = (α1, …, αn) is the vector in (𝔽^≠)^n of the elements on the diagonal of M. The coordinate ring of Torn(𝔽) is the ring 𝔽[𝛂, 𝛂^{−1}] of Laurent polynomials in 𝛂.

Now consider the case when ℳ consists of a single diagonal matrix M = Diag(𝛌). Let 𝔦 be the ideal which defines ⟨M⟩ ⊆ Torn(𝔽). Given a relation 𝛌^𝒌 = 1 (𝒌 ∈ ℤ^n) between the λi, any power M^i = Diag(𝛌^i) satisfies the same relation (𝛌^i)^𝒌 = 1, whence 𝛂^𝒌 − 1 ∈ 𝔦. Let 𝔧 ⊆ 𝔦 be the ideal generated by all 𝛂^𝒌 − 1 such that 𝛌^𝒌 = 1.

Lemma 27. We have 𝔧=𝔦.

Proof. We already observed that 𝔧 ⊆ 𝔦. Assuming for contradiction that 𝔧 ≠ 𝔦, choose

f = Σ_{i=1}^r fi 𝛂^{𝒌i} ∈ 𝔦 ∖ 𝔧

such that r is minimal. If i ≠ j, then 𝛌^{𝒌i−𝒌j} ≠ 1, since otherwise 𝛂^{𝒌i−𝒌j} − 1 ∈ 𝔧 and f − fi𝛂^{𝒌i} + fi𝛂^{𝒌j} ∈ 𝔦 ∖ 𝔧 would have fewer than r terms. In particular, the values 𝛌^{𝒌1}, …, 𝛌^{𝒌r} are pairwise distinct, so the Vandermonde vectors (1, 𝛌^{𝒌i}, …, 𝛌^{(r−1)𝒌i}) with i ∈ {1, …, r} are linearly independent. But this contradicts the fact that f(𝛌^j) = Σ_{i=1}^r fi 𝛌^{j𝒌i} = 0 for all j ∈ {0, …, r−1}.

By EH2, we may compute a minimal finite set 𝒈1, …, 𝒈p of generators for the ℤ-module of 𝒌 ∈ ℤ^n with 𝛌^𝒌 = 1. We may also compute a basis ℬ for ker φ, where φ: ℤ^n → ℤ^p; 𝒌 ↦ (𝒌 · 𝒈1, …, 𝒌 · 𝒈p). Then e^ℒ with ℒ = Vect(ℬ) is the connected component of 1 in ⟨M⟩, since (e^ℒ)^{𝒈i} = 1 for all i, and ℒ cannot be further enlarged while preserving this property.

Let 𝒱 = (ℚ𝒈1 + ⋯ + ℚ𝒈p) ∩ ℤ^n. We construct a basis 𝒉1, …, 𝒉n of ℤ^n by taking 𝒉i shortest in 𝒱 (if i ≤ p) resp. ℤ^n (if i > p) such that 𝒉i ∉ Vect(𝒉1, …, 𝒉i−1). This basis determines a toric change of coordinates 𝛂 ↦ 𝛂^P with P ∈ GLn(ℤ), such that 𝒈1, …, 𝒈p ∈ ℤ^p × 0^{n−p} with respect to the new coordinates. Similarly, we may construct a basis 𝒃1, …, 𝒃p of ℤ^p by taking each 𝒃i shortest in ℤ^p ∖ Vect(𝒃1, …, 𝒃i−1) such that ri = min {r ∈ ℤ^> : r𝒃i ∈ ℤ𝒈1 + ⋯ + ℤ𝒈p} is maximal. This basis determines a second toric change of coordinates 𝛂 ↦ 𝛂^Q with Q ∈ GLn(ℤ), such that 𝒈i = ri𝒆i (i = 1, …, p) with respect to the new coordinates.

After the above changes of coordinates, the ideal 𝔧 is determined by the equations α1^{r1} = ⋯ = αp^{rp} = 1. Setting

ℛ = {(e^{2πi s1/r1}, …, e^{2πi sp/rp}, 1, …, 1) : 𝒔 ∈ ℕ^p, s1 < r1, …, sp < rp},

it follows that ⟨M⟩ = ℛ e^ℒ. Rewriting ℛ and ℒ with respect to the original coordinates now completes the computation of ⟨M⟩.

4.3.The algebraic group generated by a single matrix

Let us now consider the case when ℳ consists of a single arbitrary matrix M. Then we first compute the multiplicative Jordan decomposition of M. Modulo a change of basis of 𝔽^n, this means that

M=DU=UD,

where D=Ms and U=Mu are the semi-simple and unipotent parts of M:

D = Diag(λ1 In1, …, λk Ink),  U = Diag(Jn1, …, Jnk),

where Jn stands for the unipotent n × n Jordan block, with ones on the diagonal and on the first off-diagonal and zeros elsewhere.

Proposition 28. We have ⟨U⟩ = {exp(μ log U) : μ ∈ 𝔽}.

Remark. Notice that f(N) = Σ_{i≥0} fi N^i = Σ_{i=0}^{n−1} fi N^i is well-defined for power series f ∈ 𝔽[[z]] and nilpotent matrices N ∈ Matn(𝔽); in this case, f = (1 + z)^μ and N = U − 1, so that exp(μ log U) = (1 + (U − 1))^μ.

Proof. The assertion is clear if U = In, so assume U ≠ In. Let 𝒳 = {exp(μ log U) : μ ∈ 𝔽}. We clearly have ⟨U⟩ ⊆ 𝒳, since 𝒳 is a closed algebraic group which contains U. Moreover, the set {U, U², U³, …} is infinite, so dim ⟨U⟩ ≥ 1. Since 𝒳 is irreducible and dim 𝒳 = 1, we conclude that ⟨U⟩ = 𝒳.
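Since U − 1 is nilpotent, log U and exp(μ log U) are finite sums, so the one-parameter group of proposition 28 can be evaluated exactly. A small numeric sketch (assuming NumPy):

```python
import numpy as np

def nilpotent_log(U):
    """log U = log(1 + (U - 1)), a finite sum because (U - 1)^n = 0."""
    n = U.shape[0]
    N = U - np.eye(n)
    L, P = np.zeros((n, n)), np.eye(n)
    for i in range(1, n):
        P = P @ N                      # P = N^i
        L += (-1) ** (i + 1) / i * P
    return L

def unipotent_power(U, mu):
    """exp(mu * log U), i.e. (1 + (U - 1))^mu as in the remark above."""
    n = U.shape[0]
    X = mu * nilpotent_log(U)
    E, P = np.eye(n), np.eye(n)
    for i in range(1, n):
        P = P @ X / i                  # P = X^i / i!
        E += P
    return E

U = np.array([[1.0, 0, 0], [1, 1, 0], [0, 1, 1]])  # a unipotent Jordan block J3
```

For integer μ this recovers the ordinary powers of U, e.g. exp(2 log U) = U².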

Proposition 29. We have ⟨M⟩ = ⟨D⟩⟨U⟩.

Proof. Since ⟨M⟩ is a commutative group, [Hum81, Theorem 15.5] implies that ⟨M⟩ = ⟨M⟩s⟨M⟩u, where ⟨M⟩s = {Ns : N ∈ ⟨M⟩} and ⟨M⟩u = {Nu : N ∈ ⟨M⟩} are closed subgroups of ⟨M⟩. Now ⟨D⟩ and ⟨U⟩ are closed subgroups of ⟨M⟩s resp. ⟨M⟩u, so ⟨D⟩⟨U⟩ is a closed subgroup of ⟨M⟩. Since M ∈ ⟨D⟩⟨U⟩, it follows that ⟨M⟩ = ⟨D⟩⟨U⟩.

Corollary 30. If ⟨D⟩ = ℛ e^ℒ, then ⟨M⟩ = ℛ e^{ℒ + 𝔽 log U}.

4.4.Membership testing for the connected component

In order to compute the closure of the product of a finite number of algebraic groups of the form ℛ e^ℒ, an important subproblem is to test whether a given matrix M ∈ GLn(𝔽) belongs to e^ℒ.

We first observe that M ∈ e^ℒ implies ⟨M⟩ ⊆ e^ℒ. After the computation of ℛ′ and ℒ′ with ⟨M⟩ = ℛ′e^{ℒ′}, it therefore suffices to check that ℒ′ ⊆ ℒ and ℛ′ ⊆ e^ℒ. In fact, it suffices to check whether M′ ∈ e^ℒ, where M′ is the unique matrix in ℛ′ with M ∈ M′e^{ℒ′}. Modulo a suitable base change, we have thus reduced the general problem to the case when M is a diagonal matrix whose eigenvalues are all roots of unity.

Assume that M and N ∈ ℒ are such that M ∈ e^{𝔽N}. Since M and N commute, it follows that M and N can be diagonalized w.r.t. a common basis. The elements of this basis are elements of the different eigenspaces of M. In other words, if M = Diag(λ1 In1, …, λk Ink) with pairwise distinct λi, then P^{−1}NP is diagonal for some block matrix P = Diag(P1, …, Pk) with Pi ∈ GLni(𝔽) for each i. It follows that N = Diag(N1, …, Nk) for certain Ni ∈ Matni(𝔽). Without loss of generality, we may therefore replace ℒ by the intersection of ℒ with Diag(Matn1(𝔽), …, Matnk(𝔽)).

From now on, we assume that the above two reductions have been made. Let N = Diag(μ1, …, μn) be a diagonal matrix in ℒ. By lemma 27, we have M ∈ e^{𝔽N} if and only if every ℚ-linear relation 𝒍 · 𝛍 = 0 induces a relation 𝛌^{π(𝒍)} = 1, where π(𝒍) = (l1 + ⋯ + ln1, …, ln−nk+1 + ⋯ + ln). Now consider a random matrix R in ℒ, i.e. a linear combination of the basis elements with small random integer coefficients. We compute its blockwise Jordan normal form J = P^{−1}RP, so that P ∈ Diag(GLn1(𝔽), …, GLnk(𝔽)), and let N be the restriction of J to the diagonal. We have M ∈ e^{𝔽N} ⇔ M ∈ e^{𝔽J} ⇔ M = PMP^{−1} ∈ e^{𝔽R}. Computing a basis for the ℚ-linear relations of the form 𝒍 · 𝛍 = 0 using EH3, the above criterion now enables us to check whether M ∈ e^{𝔽R}.

If the check whether M ∈ e^{𝔽R} succeeds, then we are clearly done. Otherwise, since R was chosen in a random way, the offending relation 𝒍 · 𝛍 = 0 is very likely to be satisfied for all possible choices of R (up to permutations of coordinates inside each block). Indeed, the R for which this is not the case lie on a countable union 𝒰 of algebraic varieties of lower dimension, so 𝒰 has measure 0. Heuristically speaking, we may therefore conclude that M ∉ e^ℒ if the check fails (at least temporarily, modulo some final checks when the overall computation of ⟨ℳ⟩ will be completed).

Theoretically speaking, we may perform the above computations with R′ = Σ_{B ∈ ℬ} αB B instead of R, where ℬ is a basis of ℒ and the αB are formal parameters. We then check whether the relation 𝒍 · 𝛍′ = 0 is still satisfied for the analogue N′ = Diag(μ1′, …, μn′) of N. If so, then we are sure that M ∉ e^ℒ. Otherwise, we keep trying with other random elements of ℒ.

It is likely that a more efficient theoretical algorithm can be designed for testing ℚ-linear relations between the eigenvalues of elements in ℒ. One of the referees suggested using methods similar to those in [Mas88, Ber95, CS98]. However, we did not study this topic in more detail, since our final algorithm for the computation of Galois groups will be based on heuristics anyway. We also notice that a “really good” random number generator should actually never generate points which satisfy non-trivial algebraic relations.

4.5.Computing the closure ⟨ℳ⟩ of ℳ

A Lie algebra ℒ is said to be algebraic if it is the Lie algebra of some algebraic group, i.e. if e^ℒ is an algebraic subset of GLn(𝔽). It is classical [Bor91, Corollary 7.7] that the smallest Lie algebra which contains a finite number of algebraic Lie algebras is again algebraic. The Lie algebras we will consider in our algorithms will always be assumed to be algebraic. Given a finite number ℒ1, …, ℒl of algebraic Lie algebras and a basis ℬ for ℒ1 + ⋯ + ℒl, it is easy to enrich ℬ so that ℒ = Vect(ℬ) is a Lie algebra: as long as [𝒃1, 𝒃2] ∉ ℒ for two elements 𝒃1, 𝒃2 ∈ ℬ, we add [𝒃1, 𝒃2] to ℬ. By what precedes, the computed Lie algebra Lie(ℒ1 + ⋯ + ℒl) = ℒ is again algebraic.
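The enrichment step can be sketched as follows (assuming NumPy; a rank test replaces exact membership in Vect(ℬ)):

```python
import numpy as np

def lie_closure(gens, tol=1e-9):
    """Enlarge a spanning set with brackets [b1, b2] until Vect(basis)
    is closed under the Lie bracket."""
    shape = np.array(gens[0], dtype=float).shape
    basis = []                                  # flattened basis vectors

    def add(M):
        flat = M.flatten()
        if np.linalg.matrix_rank(np.vstack(basis + [flat]), tol=tol) > len(basis):
            basis.append(flat)
            return True
        return False

    queue = [np.array(g, dtype=float) for g in gens]
    for g in queue:
        add(g)
    while queue:
        A = queue.pop()
        for v in list(basis):
            B = v.reshape(shape)
            C = A @ B - B @ A                   # the bracket [A, B]
            if add(C):
                queue.append(C)
    return [v.reshape(shape) for v in basis]

# e = E12 and f = E21 generate sl(2): the bracket [e, f] completes the basis
sl2 = lie_closure([[[0, 1], [0, 0]], [[0, 0], [1, 0]]])
print(len(sl2))  # 3
```

Starting from the two nilpotent generators of sl(2), a single bracket produces the diagonal element h, after which all further brackets stay in the span.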

Putting together the ingredients from the previous sections, we now have the following algorithm for computing the smallest closed algebraic group which contains .

Algorithm Closure(ℳ)

Input: a subset ℳ = {M1, …, Mm} of GLn(𝔽)

Output: a numeric approximation of ⟨ℳ⟩

Step 1. [Initialize algorithm]

Compute ⟨Mi⟩ = ℛi e^{ℒi} for each i ∈ {1, …, m}

Let ℛ ← ℛ1 ∪ ⋯ ∪ ℛm (notice that 1 ∈ ℛ)

Let ℒ ← Lie(ℒ1 + ⋯ + ℒm)

Step 2. [Closure]

While there exists an N ∈ ℛ ∖ {1} with NℒN^{−1} ⊄ ℒ, set ℒ ← Lie(ℒ + NℒN^{−1})

While there exists an N ∈ ℛ ∖ {1} with N ∈ e^ℒ, set ℛ ← ℛ ∖ {N}

While there exists an N ∈ ℛ² with N ∉ ℛ e^ℒ do

Compute ⟨N⟩ = ℛ′ e^{ℒ′}

If ℒ′ ⊄ ℒ then set ℒ ← Lie(ℒ + ℒ′), quit the loop and repeat step 2

Otherwise, set ℛ ← ℛ ∪ {N}

Return ℛ e^ℒ

The termination of this algorithm relies on the following lemma, whose proof was kindly communicated to the author by J.-Y. Hée.

Lemma 31. Let ℋ be a closed algebraic subgroup of GLn(ℂ) and let M1, …, Mm ∈ GLn(ℂ) be a finite number of matrices in the normalizer of ℋ. Denote by 𝒢 the group generated by ℋ and M1, …, Mm. If all elements in 𝒢/ℋ have finite order, then 𝒢/ℋ is finite.

Proof. In the case when ℋ = {1}, the result is classical [Dix71, Theorem 9.2]. In the general case, the normalizer 𝒩 of ℋ is a closed algebraic subgroup of GLn(ℂ) and ℋ is a normal subgroup of 𝒩. By [Bor91, Theorem 6.8 and Proposition 1.10], it follows that 𝒩/ℋ is an affine algebraic group which is isomorphic to a closed algebraic matrix group. This reduces the general case to the special case when ℋ = {1}.

Theorem 32. There exists an ε0 > 0 such that, for every ε > 0 with ε < ε0, the set ℛ e^ℒ returned by Closure, considered as a subset of GLn(ℂeff), coincides with the smallest closed algebraic subgroup of GLn(ℂeff) which contains ℳ.

Proof. Clearly, the dimension of ℒ increases throughout the execution of the algorithm, so it ultimately remains constant. From this point on, the set ℛ keeps growing, and the lemma implies that ℛ ultimately stabilizes. When this happens, ℛ is closed under multiplication modulo e^ℒ, as well as under multiplicative inverses, since each element in ℛ has finite order modulo e^ℒ. We conclude that ℛ e^ℒ is indeed the smallest closed algebraic subgroup of GLn(𝔽) which contains ℳ, provided that the approximate zero-test always returns the right result.

In order to prove correctness at a sufficient precision, we assume that we use the theoretical membership test from section 4.4 and that the random number generator generates the same random numbers each time we relaunch the algorithm at a higher precision. Now consider the trace of the execution of our algorithm at infinite precision. Let ε0 be a precision such that all zero-tests in this execution tree remain correct when we replace the infinite precision by a precision ε < ε0. Then the trace of the execution at any finite precision ε < ε0 coincides with the trace of the execution at infinite precision. This completes the proof.

Remark 33. The main improvement of the algorithm Closure w.r.t. the algorithm from [DJK03] lies in the more efficient treatment of the connected component (using linear algebra). On the other hand, the mere enumeration of representatives of the connected components can be very inefficient (although a Gröbner basis might be of the same size). Fortunately, we will see in the next sections how to remove this drawback.

Assume now that ℳ is the set of generators for 𝒢L,𝒉z0 as computed in theorem 20. Assume that we have computed a reasonable candidate ℛ e^ℒ for ⟨ℳ⟩, expressed in the original basis corresponding to 𝒉z0. We still have to reconstruct ℛ̃ ⊆ GLn(𝕂) and ℒ̃ = Vect(ℬ̃) with ℬ̃ ⊆ Matn(𝕂), such that ℛ̃ e^{ℒ̃} ∩ GLn(𝕂) = ℛ e^ℒ ∩ GLn(𝕂).

In the case of ℒ, by selecting a suitable basis of Matn(𝔽), we may consider ℬ as a big d × n² matrix whose first d columns are linearly independent, where d = dim ℒ. We compute the row-echelon form E of this matrix, whose first d columns form an identity matrix. The entries of E must be in 𝕂: provided that ℒ is indeed generated by a basis of matrices with entries in 𝕂, the row-echelon form of this second basis coincides with E. It therefore suffices to reconstruct the entries of E using the LLL-algorithm.

In the case of a matrix M ∈ ℛ, the set M e^ℒ is an algebraic variety of dimension d over 𝕂. Now choose M̃ ∈ M e^ℒ close to M in such a way that d independent coordinates of M̃ are all in 𝕂. Then the other coordinates of M̃, considered as elements of ℂeff, are easily found using Newton's method. Since M e^ℒ is an algebraic variety over 𝕂, these other coordinates are actually in 𝕂, and we reconstruct them using the LLL-algorithm.

4.6.Fast computations with the connected components

The algorithm Closure from the previous section is quite inefficient when the set ℛ becomes large. It is therefore useful to seek a better computational representation of ℛ. For finite groups 𝒢, one classical idea is to search for a sequence of subgroups

1=𝒢0⊊︀𝒢1⊊︀⊊︀𝒢k=𝒢 (9)

such that the indices (𝒢i : 𝒢i−1) are small. Then we may represent elements in 𝒢 by sequences (a1, …, ak) with ai ∈ 𝒢i/𝒢i−1 for each i. This representation is particularly useful if 𝒢 operates on a set 𝒮 and if there exist points a1, …, ak in 𝒮 such that

𝒢i = Sa1,…,ak−i

is the stabilizer of the set {a1, …, ak−i} for each i. Then the set Sa1,…,ai−1/Sa1,…,ai corresponds to the orbit of ai while leaving a1, …, ai−1 fixed [Sim70, Sim71].

In the case of matrix groups, one often takes 𝒮 = 𝔽^n [MO95]. However, this approach only yields interesting results when there exist non-trivial invariant subspaces under the action of the group, which will usually not be the case for us (otherwise we may factor L and consider smaller problems). A theoretical way out of this is to also consider the action of 𝒢 on exterior powers Λ^p 𝔽^n. However, this approach is very expensive from a computational point of view. In our more specific context of matrices with complex entries, we will therefore combine two other approaches: non-commutative lattice reduction and the operation of 𝒢 on Matn(𝔽)/e^ℒ via conjugations N ↦ MNM^{−1}.

The algebra Matn(𝔽) admits a natural (multiplicative) norm, given by

‖M‖ = sup {|M𝒗| : 𝒗 ∈ 𝔽^n, |𝒗| = 1},

where |·| stands for the Euclidean norm on 𝔽^n. If 𝒢 = ℛ e^ℒ/e^ℒ is finite, this enables us to construct 𝒢0 = 1, 𝒢1, …, 𝒢k = 𝒢 as in (9) as follows. Assuming that 𝒢0, …, 𝒢i−1 have been constructed, we consider a matrix Mi ∉ 𝒢i−1 for which ‖Mi − 1‖ is minimal, and let 𝒢i be the subgroup generated by Mi e^ℒ and 𝒢i−1 in 𝒢. This construction allows us to rapidly identify a big commutative part of 𝒢. More precisely, we have

Proposition 34. Let A, B ∈ GLn(𝔽) be such that ε = ‖A − 1‖ < 1 and δ = ‖B − 1‖ < 1. Then we have

‖ABA^{−1}B^{−1} − 1‖ ≤ B(δ, ε) = 2ε²/(1 − ε) + 2δ²/(1 − δ) + 4εδ/((1 − ε)(1 − δ)).

Proof. Writing A = 1 + Δ and B = 1 + Ε, we expand A^{−1} = 1 − Δ + Δ² − ⋯ and B^{−1} = 1 − Ε + Ε² − ⋯ in ABA^{−1}B^{−1}. This yields a non-commutative power series in Δ and Ε whose terms of degree 1 vanish. It follows that

‖ABA^{−1}B^{−1} − 1‖ ≤ (1 + δ)(1 + ε)/((1 − δ)(1 − ε)) − 1 − 2δ − 2ε = B(δ, ε).

The proposition implies that ‖ABA^{−1}B^{−1} − 1‖ < min(ε, δ) whenever max(ε, δ) is sufficiently small; for ε = δ, this holds as soon as δ < (5 − 2√5)/5 ≈ 0.106, the value for which B(δ, δ) = δ. Now take A = Mi and B = Mj with i < j, where the Mi are as above. Then it follows that A and B commute modulo e^ℒ whenever B(δ, ε) < ‖M1 − 1‖. What is more, proposition 34 shows that taking commutators is a convenient way to construct matrices close to the identity from a set of non-commutative generators.
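The bound of proposition 34 is easy to check numerically (a sketch assuming NumPy; the operator norm ‖·‖ is the largest singular value):

```python
import numpy as np

def op_norm(M):
    return np.linalg.norm(M, 2)          # sup |Mv| / |v|

def commutator_bound(delta, eps):
    """B(delta, eps) from proposition 34."""
    return (2 * eps ** 2 / (1 - eps) + 2 * delta ** 2 / (1 - delta)
            + 4 * eps * delta / ((1 - eps) * (1 - delta)))

rng = np.random.default_rng(0)
I = np.eye(4)
max_excess = 0.0                         # worst observed violation of the bound
for _ in range(100):
    A = I + 0.05 * rng.standard_normal((4, 4))
    B = I + 0.05 * rng.standard_normal((4, 4))
    eps, delta = op_norm(A - I), op_norm(B - I)
    C = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
    max_excess = max(max_excess, op_norm(C - I) - commutator_bound(delta, eps))
print(max_excess)
```

For matrices this close to the identity, the commutator is quadratically closer, which is exactly the contraction exploited by the basis reduction below.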

However, from the effective point of view, we will not carry out the exact computation of minimal representatives Mi in cosets of algebraic groups in detail. We will rather simulate such a computation in a way which is sufficient for our purpose. If M is such that ‖M − 1‖ is small, then we will also try to use the fact that the centralizer CM of M is often a big subgroup of ℛ e^ℒ/e^ℒ, so that the orbit of M under the conjugations N ↦ N^{−1}MN is small.

4.7.Non-commutative lattice reduction

Let 𝒢 be a closed algebraic subgroup of GLn(𝔽) with associated Lie algebra ℒ. In this section, we will show how to compute efficiently with elements of the finite group ℋ = 𝒢/e^ℒ. Until the very end of this section, we assume that 𝒢 is included in the connected component of the normalizer of e^ℒ in GLn(𝔽). We denote by 𝒩 the Lie algebra of this connected component. By [Hum81, Theorem 13.3], we have 𝒩 = {N ∈ Matn(𝔽) : [N, ℒ] ⊆ ℒ}.

Orthogonal projection
Let M ∈ 𝒢 and recall that M belongs to the normalizer of e^ℒ. If ‖M − 1‖ < 1, then X = log(1 + (M − 1)) also belongs to the normalizer: since e^{𝔽X} lies in the connected component of this normalizer, we have X ∈ 𝒩. Now consider the orthogonal supplement 𝒮 of ℒ in 𝒩 for the Hermitian product on Matn(𝔽). We define π(M) = e^Y, where Y is the orthogonal projection of X on 𝒮. From [X, ℒ] ⊆ ℒ, it follows that π(M) ∈ M e^ℒ, and we denote M̄ = π(M). Since e^𝒩 is connected, the function M ↦ log M may actually be analytically continued to a multivalued function e^𝒩 → 𝒩. After choosing branch cuts (the way this is done is not crucial for what follows, provided that we make the standard choice for M with ‖M − 1‖ < 1), this allows us to extend the definitions of π and M ↦ M̄ to the case when ‖M − 1‖ ≥ 1.

Representation of the elements in
Let X ∈ 𝒩 and M = e^X ∈ 𝒢 be such that ‖M − 1‖ is minimal. Let p1 ⋯ pl be the prime decomposition of the order of M modulo e^ℒ, with p1 ≤ ⋯ ≤ pl. Let A0 = X and Ai = M^{p1⋯pi} for all i ∈ {1, …, l}. Let ℋi be the subgroup of ℋ of elements which commute with Ai modulo e^ℒ, so that ℋ0 ⊆ ⋯ ⊆ ℋl = ℋ. For i ∈ {1, …, l}, we represent elements in the quotient ℋi/ℋi−1 by elements in the orbit of the action ΦAi: N ↦ Ai N Ai^{−1} modulo ℋi−1. Since [X, ℒ] ⊆ ℒ, the set ℒ′ = ℒ + 𝔽X is a Lie algebra whose normalizer contains ℋ0. Consequently, ℋ0 = ⟨M⟩ × (ℋ0 ∩ e^{ℒ′}/e^ℒ), and we represent elements in ℋ0 by products M^k P, with k ∈ ℕ and P ∈ ℋ0 ∩ e^{ℒ′}/e^ℒ. The elements in ℋ0 ∩ e^{ℒ′}/e^ℒ are represented in a similar way as the elements in ℋ, using recursion. The successive matrices M used for ℋ = 𝒢/e^ℒ, ℋ0 ∩ e^{ℒ′}/e^ℒ, etc. will be called a basis for ℋ. A basis (B1, …, Bm) is said to be sorted if ‖B1 − 1‖ ≤ ⋯ ≤ ‖Bm − 1‖.

Adding new elements to a basis
Let (B1, …, Bm) be a sorted basis for ℋ = 𝒢/e^ℒ and assume that we want to compute the extension 𝒢̂ = ⟨𝒢, N⟩ of 𝒢 by a new matrix N. Whenever we hit an element M̂ ∈ ℋ̂ = 𝒢̂/e^ℒ with ‖M̂ − 1‖ < ‖B1 − 1‖ during our computations, then we start the process of basis reduction, which is described below. Whenever we find a new element in the connected component, then we abort all computations and return this element (indeed, in that case, we may continue with the closure of the connected component in Closure).

Let M = B_1, X, etc. be as above. We start by computing the orbit of 𝒢̂ modulo ℱ_{l−1} for Φ_{A_l}. Whenever we hit an element P ≠ 1 (modulo e^ℒ) with B_1 P B_1^{−1} P^{−1} < B_1 or P < B_1, then we start the process of basis reduction. Otherwise, we obtain a finite orbit, together with a finite number of matrices by which we have to extend ℱ_{l−1}. We keep doing this, using the same method, for ℱ_{l−1} down to ℱ_1.
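The orbit computation itself is a standard closure loop: conjugate by the known generators until no new matrices appear, up to a numerical tolerance. A sketch (Python/numpy; in the full algorithm, every newly found matrix would in addition be compared against B_1 as described above):

```python
import numpy as np

def conjugation_orbit(A, gens, tol=1e-8, max_size=10000):
    # Closure of {A} under the conjugations N -> G N G^(-1) by the given
    # generators G; matrices are identified up to the tolerance tol.
    orb, frontier = [A], [A]
    while frontier and len(orb) < max_size:
        N = frontier.pop()
        for G in gens:
            C = G @ N @ np.linalg.inv(G)
            if all(np.linalg.norm(C - O) >= tol for O in orb):
                orb.append(C)
                frontier.append(C)
    return orb
```

Conjugating the reflection diag(1, −1) by a quarter turn, say, produces an orbit with exactly two elements.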

At the end, we still have to show how to extend ℱ_0 with a new matrix N. Now recursive application of the algorithm to ℱ_0/e^{ℒ'} and N yields a sorted basis (B̄_1, …, B̄_m̄). When keeping track of the corresponding powers of e^X during the computations, we also obtain a finite system of generators for 𝒢̂ ∩ e^{𝔽X}. Using g.c.d. computations, we either obtain a minimal generator B̂_1 of this intersection or a new element in the connected component. In the first case, we return (B̂_1, B_2, …, B_m) if B̂_1 < B_2 and apply basis reduction otherwise.
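The g.c.d. step can be illustrated as follows: assuming, for simplicity, that the bookkeeping produces elements e^{k_1 X}, …, e^{k_s X} of 𝒢̂ with integer exponents k_1, …, k_s, the subgroup they generate inside e^{ℤX} is generated by e^{dX} with d = gcd(k_1, …, k_s). A trivial Python sketch (function name ours):

```python
from functools import reduce
from math import gcd

def minimal_generator_exponent(ks):
    # If e^(k X) lies in the group for each integer k in ks, then the
    # subgroup generated by these elements inside e^(Z X) is generated
    # by e^(d X), where d is the gcd of the exponents.
    return reduce(gcd, ks, 0)
```

For example, the elements e^{6X}, e^{10X} and e^{15X} generate the same subgroup of e^{ℤX} as e^X itself, since gcd(6, 10, 15) = 1.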

Basis reduction
Let B_1, …, B_m be elements of 𝒢 with B_1 ≤ ⋯ ≤ B_m. We call (B_1, …, B_m) a raw basis. In the above algorithms, raw bases occur when we are given a sorted basis (B_2, …, B_m) for 𝒢, and we find a new element B_1 with B_1 < B_2.

Using the above basis extension procedure, we may transform a raw basis (B_1, …, B_m) into a basis for 𝒢: starting with (B_1), we successively add B_2, …, B_m. However, it is more efficient to reduce (B_1, …, B_m) first. More precisely, let us now describe a procedure which tries to replace (B_1, …, B_m) by a better raw basis (B_1', …, B_{m'}'), with ⟨B_1', …, B_{m'}'⟩ = ⟨B_1, …, B_m⟩, and whose elements are closer to the identity. Of course, we may always return the original basis if a better one cannot be found.

We first test whether all basis elements are roots of unity modulo e^ℒ. If not, then we have found a new element in the connected component. We next test whether there exist i, j with B_i B_j B_i^{−1} B_j^{−1} < B_1, in which case we keep adding the smallest such commutator to the basis. When this process stops, we write B_1 = e^{X_1}, …, B_m = e^{X_m} with X_1, …, X_m ∈ 𝒩 and consider all lattice reductions X_i ← X_i + k X_j (k ∈ ℤ) proposed by the LLL-algorithm in the commutative vector space spanned by the images of X_1, …, X_m modulo ℒ. Whenever B_i B_j^k ≠ 1 and B_i B_j^k < B_i for one such reduction, then we perform the corresponding reduction B_i ← B_i B_j^k on our basis and repeat the basis reduction process.
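The reduction step X_i ← X_i + k X_j can be sketched as follows; for simplicity, the sketch performs a plain size-reduction pass (with k the rounded Gram–Schmidt coefficient, as the LLL-algorithm would propose) instead of full LLL:

```python
import numpy as np

def size_reduce(Xs, max_passes=10):
    # Try reductions X_i <- X_i + k X_j (k an integer) which shrink the
    # norm of X_i; k is the rounded Gram-Schmidt coefficient, as LLL
    # would propose on the lattice spanned by the X_i.
    Xs = [np.asarray(X, dtype=complex) for X in Xs]
    for _ in range(max_passes):
        changed = False
        for i in range(len(Xs)):
            for j in range(len(Xs)):
                if i == j or np.linalg.norm(Xs[j]) < 1e-12:
                    continue
                # projection coefficient of X_i on X_j, rounded to Z
                k = -round((np.vdot(Xs[j], Xs[i]) / np.vdot(Xs[j], Xs[j])).real)
                if k != 0 and np.linalg.norm(Xs[i] + k * Xs[j]) < np.linalg.norm(Xs[i]):
                    Xs[i] = Xs[i] + k * Xs[j]
                    changed = True
        if not changed:
            break
    return Xs
```

In the actual algorithm, each proposed reduction is only carried out on the basis when the corresponding product B_i B_j^k is indeed smaller than B_i.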

The general case
We still have to show how to deal with the case when 𝒢 is not included in the connected component e^𝒩 of the normalizer of e^ℒ in Mat_n(𝔽). In that case, we start with the computation of a basis for 𝒩, using linear algebra. Since e^𝒩 ∩ 𝒢 is a normal subgroup of 𝒢, we have 𝒢 ≅ (𝒢/(e^𝒩 ∩ 𝒢)) ⋉ (e^𝒩 ∩ 𝒢). Now we have explained above how to compute with elements in e^𝒩 ∩ 𝒢. If 𝒩 ⊋ ℒ, then we may use recursion for computations in the finite group 𝒢/(e^𝒩 ∩ 𝒢). If 𝒩 = ℒ, then the elements in 𝒢/(e^𝒩 ∩ 𝒢) necessarily have small order, so we simply list the elements of 𝒢/e^ℒ.
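Computing a basis for 𝒩 = {N ∈ Mat_n(𝔽) : [N, ℒ] ⊆ ℒ} is indeed plain linear algebra: the condition that [N, L] lies in the span of ℒ for each basis element L of ℒ is linear in the entries of N. A numerical sketch (Python/numpy; the function name is ours), which returns a basis of the nullspace of the corresponding system:

```python
import numpy as np

def normalizer_lie_algebra(L_basis, n, tol=1e-9):
    # Basis of N = { X in Mat_n : [X, L] in span(L_basis) for all L },
    # obtained as the nullspace of a linear system in the entries of X.
    B = np.column_stack([L.reshape(-1) for L in L_basis])  # vec'd basis of L
    Q, _ = np.linalg.qr(B)
    Pc = np.eye(n * n) - Q @ Q.conj().T  # projector off span(L_basis)
    rows = []
    for L in L_basis:
        # row-major vec: vec([X, L]) = (I kron L^T - L kron I) vec(X)
        ad = np.kron(np.eye(n), L.T) - np.kron(L, np.eye(n))
        rows.append(Pc @ ad)
    A = np.vstack(rows)
    _, s, Vh = np.linalg.svd(A)
    return [Vh[i].reshape(n, n) for i in range(len(Vh))
            if i >= len(s) or s[i] < tol]
```

For ℒ = 𝔽·E, with E the elementary nilpotent 2 × 2 matrix, this yields the three-dimensional algebra of upper triangular matrices.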

5.Conclusion and final notes

We hope to have provided convincing evidence that analytic methods may be used for the efficient computation of differential Galois groups and for related problems, such as the factorization of linear differential operators.

The two main remaining challenges are the concrete implementation of the algorithms presented here (as part of a more general library for computations with analytic functions, such as [vdHea05]) and the development of a priori or a posteriori methods for ensuring the correctness of the computed result. Some ideas in this direction are as follows:

Besides the above ideas for improving the algorithms, this paper also raises a few other interesting questions:

Of course, a better mastery of the algorithms in this paper may also lead to more efficient algorithms for other computations which rely on differential Galois theory, such as the computation of Liouvillian or other special forms of solutions. More generally, our algorithms may be used for other computations with algebraic matrix groups over ℂ and other fields of characteristic 0. We also expect all results to generalize to holonomic systems of linear partial differential equations.

Acknowledgment
The author would like to thank J.-Y. Hée, W. de Graaf, F. Zara, J. Écalle and the referees for several helpful comments and references.

Bibliography

[BB85] D. Bertrand and F. Beukers. Équations différentielles linéaires et majorations de multiplicités. Ann. Sci. de l'École Norm. Sup., 28(4-5):473–494, 1985.

[Bek94] E. Beke. Die Irreduzibilität der homogenen linearen Differentialgleichungen. Math. Ann., 45, 1894.

[Ber95] D. Bertrand. Minimal heights and polarizations on group varieties. Duke Math. J., 80(1):223–250, 1995.

[Bor91] A. Borel. Linear algebraic groups. Springer-Verlag, 2nd edition, 1991.

[Bra91] B.L.J. Braaksma. Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Diff. Eq, 92:45–75, 1991.

[Bra92] B.L.J. Braaksma. Multisummability of formal power series solutions to nonlinear meromorphic differential equations. Ann. Inst. Fourier de Grenoble, 42:517–540, 1992.

[CC90] D.V. Chudnovsky and G.V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory (Computers in Mathematics, Stanford, CA, 1986). In Lect. Notes in Pure and Applied Math., volume 125, pages 109–232, New York, 1990. Dekker.

[Clu04] T. Cluzeau. Algorithmique modulaire des équations différentielles linéaires. PhD thesis, Université de Limoges, 2004.

[CNP93] B. Candelberger, J.C. Nosmas, and F. Pham. Approche de la résurgence. Actualités mathématiques. Hermann, 1993.

[CS98] É. Compoint and M. Singer. Computing Galois groups of completely reducible differential equations. JSC, 11:1–22, 1998.

[Der99] H. Derksen. Computation of reductive group invariants. Adv. in Math., 141:366–384, 1999.

[dG00] W. de Graaf. Lie Algebras: Theory and Algorithms, volume 56 of North Holland Mathematical Library. Elsevier science, 2000.

[Dix71] J.D. Dixon. The structure of linear groups. Van Nostrand Reinhold Company, 1971.

[DJK03] H. Derksen, E. Jaendel, and P. Koiran. Quantum automata and algebraic groups. Technical Report 2003-39, LIP, École Norm. Sup. de Lyon, 2003. Presented at MEGA 2003; to appear in JSC.

[É85] J. Écalle. Les fonctions résurgentes I–III. Publ. Math. d'Orsay 1981 and 1985, 1985.

[É87] J. Écalle. L'accélération des fonctions résurgentes (survol). Unpublished manuscript, 1987.

[É92] J. Écalle. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques, 1992.

[É93] J. Écalle. Six lectures on transseries, analysable functions and the constructive proof of Dulac's conjecture. In D. Schlomiuk, editor, Bifurcations and periodic orbits of vector fields, pages 75–184. Kluwer, 1993.

[Fab85] E. Fabry. Sur les intégrales des équations différentielles linéaires à coefficients rationnels. PhD thesis, Paris, 1885.

[Hum81] J.E. Humphreys. Linear algebraic groups. Graduate Texts in Math. Springer, 1981.

[Kap57] I. Kaplansky. An introduction to differential algebra. Hermann, 1957.

[Kol73] E.R. Kolchin. Differential algebra and algebraic groups. Academic Press, New York, 1973.

[LLL82] A.K. Lenstra, H.W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261:515–534, 1982.

[Mas88] D. Masser. Linear relations on algebraic groups. In A. Baker, editor, New advances in transendence theory, pages 248–262. Cambridge Univ. Press, 1988.

[Men96] F. Menous. Les bonnes moyennes uniformisantes et leur application à la resommation réelle. PhD thesis, Université Paris-Sud, France, 1996.

[Mit96] C. Mitschi. Differential Galois groups of confluent generalized hypergeometric equations: an approach using Stokes multipliers. Pac. J. Math., 176(2):365–405, 1996.

[MO95] S.H. Murray and E.A. O'Brien. Selecting base points for the Schreier-Sims algorithm for matrix groups. JSC, 18:577–584, 1995.

[Moe73] R. Moenck. Fast computation of GCDs. In Proc. of the 5th ACM Annual Symposium on Theory of Computing, pages 142–171, New York, 1973. ACM Press.

[MR91] J. Martinet and J.-P. Ramis. Elementary acceleration and multisummability. Ann. Inst. Henri Poincaré, 54(4):331–401, 1991.

[PW02] V.Y. Pan and X. Wang. Acceleration of Euclidean algorithm and extensions. In Teo Mora, editor, Proc. ISSAC '02, pages 207–213, Lille, France, July 2002.

[Ram85] J.-P. Ramis. Phénomène de Stokes et filtration Gevrey sur le groupe de Picard-Vessiot. Notes aux CRAS, 301(I/5):165–167, 1985.

[Rit50] J.F. Ritt. Differential algebra. Amer. Math. Soc., New York, 1950.

[Sch95] L. Schlesinger. Handbuch der Theorie der linearen Differentialgleichungen, volume I. Teubner, Leipzig, 1895.

[Sch97] L. Schlesinger. Handbuch der Theorie der linearen Differentialgleichungen, volume II. Teubner, Leipzig, 1897.

[Sim70] C.C. Sims. Computational problems in abstract algebra, chapter Computational methods in the study of permutation groups, pages 169–183. Pergamon press, Oxford, 1970.

[Sim71] C.C. Sims. Determining the conjugacy classes of permutation groups. In G. Birkhoff and M. Hall Jr., editors, Computers in algebra and number theory, volume 4 of Proc. A.M.S., pages 191–195, New York, 1971. A.M.S.

[SU93] M. Singer and F. Ulmer. Galois groups of second and third order linear differential equations. JSC, 16:9–36, 1993.

[vdH99] J. van der Hoeven. Fast evaluation of holonomic functions. TCS, 210:199–215, 1999.

[vdH01a] J. van der Hoeven. Complex transseries solutions to algebraic differential equations. Technical Report 2001-34, Univ. d'Orsay, 2001.

[vdH01b] J. van der Hoeven. Fast evaluation of holonomic functions near and in singularities. JSC, 31:717–743, 2001.

[vdH02] J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.

[vdH03] J. van der Hoeven. Majorants for formal power series. Technical Report 2003-15, Université Paris-Sud, Orsay, France, 2003.

[vdH04] J. van der Hoeven. Computations with effective real numbers. Technical Report 2004-02, Université Paris-Sud, Orsay, France, 2004. To appear in TCS.

[vdH05a] J. van der Hoeven. Effective complex analysis. JSC, 39:433–449, 2005.

[vdH05b] J. van der Hoeven. Efficient accelero-summation of holonomic functions. Technical Report 2005-54, Université Paris-Sud, Orsay, France, 2005. Submitted to JSC.

[vdHea05] J. van der Hoeven et al. Mmxlib: the standard library for Mathemagix, 2002–2005. http://www.mathemagix.org/mmxweb/web/welcome-mml.en.html.

[vdP95] M. van der Put. Differential equations modulo p. Compositio Mathematica, 97:227–251, 1995.

[vdPS03] M. van der Put and M. Singer. Galois Theory of Linear Differential Equations, volume 328 of Grundlehren der mathematischen Wissenschaften. Springer, 2003.

[vH96] M. van Hoeij. Factorization of linear differential operators. PhD thesis, Univ. of Nijmegen, The Netherlands, 1996.

[vH97] M. van Hoeij. Factorization of differential operators with rational functions coefficients. JSC, 24:537–561, 1997.

[vHW97] M. van Hoeij and J.A. Weil. An algorithm for computing invariants of differential Galois groups. J. Pure Appl. Algebra, 117–118:353–379, 1997.