Let $L \in \mathbb{K}(z)[\partial]$ be a linear differential operator, where $\mathbb{K}$ is an effective algebraically closed subfield of $\mathbb{C}$. It can be shown that the differential Galois group of $L$ is generated (as a closed algebraic group) by a finite number of monodromy matrices, Stokes matrices and matrices in local exponential groups. Moreover, there exist fast algorithms for the approximation of the entries of these matrices.

In this paper, we present a numeric-symbolic algorithm for the computation of the closed algebraic subgroup generated by a finite number of invertible matrices. Using the above results, this yields an algorithm for the computation of differential Galois groups, when computing with a sufficient precision.

Even though there is no straightforward way to find a “sufficient precision” for guaranteeing the correctness of the end result, it is often possible to check a posteriori whether the end result is correct. In particular, we present a non-heuristic algorithm for the factorization of linear differential operators.
Let $L \in \mathbb{K}(z)[\partial]$ be a monic linear differential operator of order $r$, where $\mathbb{K}$ is an effective algebraically closed subfield of $\mathbb{C}$. A holonomic function is a solution to the equation $Lh = 0$. The differential Galois group $\mathcal{G}$ of $L$ is a linear algebraic group which acts on the space $V$ of solutions (see section 2.2 and [Kap57, vdPS03, Kol73]). It carries a lot of information about the solutions in $V$ and on the relations between different solutions. For instance, the existence of non-trivial factorizations of $L$ and the existence of Liouvillian solutions can be read off from the Galois group. This makes it an interesting problem to explicitly compute the Galois group of $L$.
A classical approach in this area is to let $\mathcal{G}$ act on other vector spaces obtained from $V$ by the constructions from linear algebra, such as symmetric powers $\operatorname{Sym}^k V$ and exterior powers $\Lambda^k V$ [Bek94, SU93]. For a suitable such space $W$, the Galois group consists precisely of those invertible matrices which leave a certain one-dimensional subspace of $W$ invariant [Hum81, chapter 11]. Invariants in $\operatorname{Sym}^k V$ or $\Lambda^k V$ under $\mathcal{G}$ may be computed more efficiently by considering the local solutions of $L$ at singularities [vHW97, vH97, vH96]. More recently, and assuming (for instance) that the coefficients of $L$ are actually in $\mathbb{Q}(z)$, alternative algorithms appeared which are based on the reduction of the equation $Lh = 0$ modulo a prime number $p$ [Clu04, vdP95, vdPS03].
In this paper, we will study another type of “analytic modular” algorithms, by studying the operator $L$ in greater detail near its singularities using the theory of accelero-summation [É85, É87, É92, É93, Bra91, Bra92]. More precisely, we will use the following facts:
1. The Galois group $\mathcal{G}$ is generated (as a closed algebraic group) by a finite number of monodromy matrices, Stokes matrices and matrices in so-called local exponential groups.

2. If $\mathbb{K}$ is the algebraic closure of $\mathbb{Q}$, then $n$-digit approximations of the entries of these matrices can be computed in quasi-linear time.
When using these facts for the computation of differential Galois groups, the bulk of the computations is reduced to linear algebra in dimension $r$ with multiple precision coefficients.
In comparison with previous methods, this approach is expected to be much faster than algorithms which rely on the use of exterior powers. A detailed comparison with arithmetic modular methods would be interesting. One advantage of arithmetic methods is that they are easier to implement in existing systems. On the other hand, our analytic approach relies on linear algebra in dimension $r$ (with floating-point coefficients), whereas modulo $p$ methods rely on linear algebra in dimension $r$ (with coefficients modulo $p$), so the first approach might be a bit faster. Another advantage of the analytic approach is that it is more easily adapted to coefficient fields $\mathbb{K}$ with transcendental constants.
Let us outline the structure of this paper. In section 2, we start by recalling some standard terminology and we briefly review the theorems on which our algorithms rely. We first survey differential Galois theory, monodromy and local exponential groups. We next recall some basic definitions and theorems from the theory of accelero-summation and the link with Stokes matrices and differential Galois groups. We finally recall some theorems about the effective approximation of the transcendental numbers involved in the whole process.
Before coming to the computation of differential Galois groups, we first consider the simpler problem of factoring $L$ in section 3. We recall that there exists a non-trivial factorization of $L$ if and only if the Galois group of $L$ admits a non-trivial invariant subspace. By using computations with limited precision, we show how to use this criterion in order to compute candidate factorizations or a proof that there exist no factorizations. It is easy to check a posteriori whether a candidate factorization is correct, so we obtain a factorization algorithm by increasing the precision until we obtain a correct candidate or a proof that there are no factorizations.
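The a posteriori check amounts to multiplying the candidate factors back together in the non-commutative ring $\mathbb{K}(z)[\partial]$ and comparing with $L$. The following sketch (our own illustration, not the paper's algorithm) implements the skew product using the Leibniz rule $\partial^i \circ b = \sum_k \binom{i}{k} b^{(k)} \partial^{i-k}$:

```python
from math import comb
import sympy as sp

z = sp.symbols('z')

def op_mul(A, B):
    """Multiply differential operators A = sum_i A[i]*d^i and
    B = sum_j B[j]*d^j (coefficient lists, lowest degree first) in K(z)[d],
    using the Leibniz rule d^i o b = sum_k C(i,k) * b^(k) * d^(i-k)."""
    res = [sp.Integer(0)] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            for k in range(i + 1):
                res[i - k + j] += a * comb(i, k) * sp.diff(b, z, k)
    return [sp.simplify(c) for c in res]

# Candidate factorization of L = d^2 + (1/z) d as (d + 1/z) o d:
L1 = [1/z, sp.Integer(1)]            # d + 1/z
L2 = [sp.Integer(0), sp.Integer(1)]  # d
print(op_mul(L1, L2))  # [0, 1/z, 1], i.e. d^2 + (1/z) d
```

Comparing the resulting coefficient list with that of $L$ certifies (or refutes) the candidate exactly, since the product is computed symbolically.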
    
In section 4 we consider the problem of computing the differential Galois group of $L$. Using the results from section 2, it suffices to show how to compute the algebraic closure of a matrix group $G$ generated by a finite number of given elements. A theoretical solution for this problem based on Gröbner basis techniques has been given in [DJK03]. The main idea behind the present algorithm is similar, but more emphasis is put on efficiency (in contrast to generality).
    
First of all, in our context of complex numbers with arbitrary precision, we may use the LLL-algorithm for the computation of linear and multiplicative dependencies [LLL82]. Secondly, the connected component $G^{0}$ of $G$ is represented as the exponential of a Lie algebra $\mathfrak{g}$ given by a basis. Computations with such Lie algebras essentially boil down to linear algebra. Finally, we use classical techniques for finite groups in order to represent and compute with the elements in the finite quotient $G/G^{0}$ [Sim70, Sim71, MO95]. Moreover, we will present an algorithm for non-commutative lattice reduction, similar to the LLL-algorithm, for the efficient computation with elements in $G$ near the identity.
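For intuition, a multiplicative dependency $\lambda_1^{k_1} \cdots \lambda_n^{k_n} = 1$ among (positive real) matrix eigenvalues becomes a linear integer relation among their logarithms, which can be detected numerically by an integer-relation algorithm. The sketch below (our illustration) uses mpmath's PSLQ, a cousin of the LLL-based reduction used in the paper:

```python
from mpmath import mp, mpf, log, pslq

mp.dps = 50  # work with 50 digits, as with multiple-precision linear algebra

# Hypothetical eigenvalues with a hidden relation: a^3 * b^(-1) = 1
a = mpf(2)
b = mpf(8)

# A multiplicative relation among positive reals becomes a linear
# integer relation among their logarithms.
rel = pslq([log(a), log(b)])
print(rel)  # [3, -1] up to sign
```

At a fixed working precision such a relation is only a candidate; as with the rest of the algorithm, its correctness has to be confirmed at a higher precision or a posteriori.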
    
The algorithms in section 4 are all done using a fixed precision. Although we do prove that we really compute the Galois group when using a sufficiently large precision, it is not clear a priori how to find such a “sufficient precision”. Nevertheless, we have already seen in section 3 that it is often possible to check the correctness of the result a posteriori, especially when we are not interested in the Galois group $\mathcal{G}$ itself, but only in some information provided by $\mathcal{G}$. Also, it might be possible to reduce the amount of dependence on “transcendental arguments” in the algorithm modulo a further development of our ideas. Some hints are given in the last section.
    
      
Throughout this paper, we will use the following notations:

- $\mathbb{K}^{n \times n}$: the set of $n \times n$ matrices with coefficients in $\mathbb{K}$.
- $\operatorname{GL}_n(\mathbb{K})$: the subgroup of $\mathbb{K}^{n \times n}$ of invertible matrices.
- $\operatorname{Vect}(S)$: the subspace generated by a subset $S$ of a larger vector space.
- $\operatorname{Alg}(S)$: the subalgebra generated by a subset $S$ of a larger algebra.

Vectors are typeset in bold face, as in $\boldsymbol{v} = (v_1, \ldots, v_n)$, and vector notations such as sums, products and powers are understood componentwise.
    
Matrices $M \in \mathbb{K}^{n \times n}$ will also be used as mappings $\mathbb{K}^n \to \mathbb{K}^n$. When making a base change $P$ in $\mathbb{K}^n$, we understand that we perform the corresponding transformations $M \mapsto P M P^{-1}$ on all matrices under consideration. We denote $\operatorname{diag}(d_1, \ldots, d_n)$ for the diagonal matrix with entries $d_1, \ldots, d_n$. The $d_i$ may either be scalars or square matrices. Given a matrix $M \in \mathbb{Z}^{n \times n}$ and a vector $\boldsymbol{v} \in (\mathbb{K}^{*})^n$, we write $\boldsymbol{v}^{M}$ for the vector $\boldsymbol{w}$ with $w_i = v_1^{M_{i,1}} \cdots v_n^{M_{i,n}}$ for all $i$.
    
Consider a monic linear differential operator $L \in \mathbb{K}(z)[\partial]$, where $\mathbb{K}$ is an algebraically closed subfield of $\mathbb{C}$ and $\partial = \mathrm{d}/\mathrm{d}z$. We will denote by $\Sigma \subseteq \mathbb{C} \cup \{\infty\}$ the finite set of singularities of $L$ (in the case of $\infty$, one considers the transformation $z \mapsto 1/z$). A Picard-Vessiot extension of $\mathbb{K}(z)$ is a differential field $\mathcal{F} \supseteq \mathbb{K}(z)$ such that
    
PV1. $\mathcal{F}$ is differentially generated by $\mathbb{K}(z)$ and a basis of solutions $h_1, \ldots, h_r$ to the equation $Lh = 0$.

PV2. $\mathcal{F}$ has $\mathbb{K}$ as its field of constants.
A Picard-Vessiot extension always exists: given a point $z_0 \in \mathbb{K} \setminus \Sigma$ and $i \in \{1, \ldots, r\}$, let $h_i$ be the unique solution to $L h_i = 0$ with $h_i^{(j-1)}(z_0) = \delta_{i,j}$ for $j \in \{1, \ldots, r\}$. We call $H = (h_1, \ldots, h_r)$ the canonical basis for the solution space of $L$ at $z_0$, and regard $H$ as a column vector. Taking $\mathcal{F} = \mathbb{K}(z)\langle h_1, \ldots, h_r \rangle$, the condition PV2 is trivially satisfied since $\mathcal{F} \subseteq \mathbb{K}((z - z_0))$ and the constant field of $\mathbb{K}((z - z_0))$ is $\mathbb{K}$.
    
Let $\mathcal{F}$ be a Picard-Vessiot extension of $\mathbb{K}(z)$ and let $H = (h_1, \ldots, h_r)$ be as in PV1. The differential Galois group $\mathcal{G} = \operatorname{Gal}(\mathcal{F}/\mathbb{K}(z))$ of the extension $\mathcal{F} \supseteq \mathbb{K}(z)$ is the group of differential automorphisms of $\mathcal{F}$ which leave $\mathbb{K}(z)$ pointwise invariant. It is classical [Kol73] that $\mathcal{G}$ is independent (up to isomorphism) of the particular choice of the Picard-Vessiot extension $\mathcal{F}$.
    
Given an automorphism $\sigma \in \mathcal{G}$, any solution $h$ to $Lh = 0$ is sent to another solution. In particular, there exists a unique matrix $M_{\sigma} \in \operatorname{GL}_r(\mathbb{K})$ with $\sigma(h_i) = M_{\sigma,i,1} h_1 + \cdots + M_{\sigma,i,r} h_r$ for all $i$. This yields an embedding $\sigma \mapsto M_{\sigma}$ of $\mathcal{G}$ into $\operatorname{GL}_r(\mathbb{K})$ and we define $\mathcal{G}_H = \{ M_{\sigma} : \sigma \in \mathcal{G} \}$. Conversely, a matrix $M \in \operatorname{GL}_r(\mathbb{K})$ belongs to $\mathcal{G}_H$ if every differential relation $P(H) = 0$ satisfied by $H$ is also satisfied by $M H$ (with $P(M H) = 0$). Since this assumption constitutes an infinite number of algebraic conditions on the coefficients of $M$, it follows that $\mathcal{G}_H$ is a Zariski closed algebraic matrix group. Whenever $\tilde{H}$ is another basis, we obtain the same matrix group $\mathcal{G}_{\tilde{H}}$ up to conjugation.
    
Assume now that $\tilde{\mathbb{K}} \supseteq \mathbb{K}$ is a larger algebraically closed subfield of $\mathbb{C}$. Then the field $\tilde{\mathcal{F}} = \tilde{\mathbb{K}}(z)\langle h_1, \ldots, h_r \rangle$ is again a Picard-Vessiot extension of $\tilde{\mathbb{K}}(z)$. Furthermore, the Ritt-Raudenbush theorem [Rit50] implies that the perfect differential ideal of all $P \in \mathbb{K}(z)\{F_1, \ldots, F_r\}$ with $P(H) = 0$ is finitely generated, say by $P_1, \ldots, P_k$. But then $P_1, \ldots, P_k$ is still a finite system of generators of the perfect differential ideal of all $P \in \tilde{\mathbb{K}}(z)\{F_1, \ldots, F_r\}$ with $P(H) = 0$. Consequently, $\tilde{\mathcal{G}}_H$ (i.e. as an algebraic group over $\tilde{\mathbb{K}}$) is determined by the same algebraic equations as $\mathcal{G}_H$. We conclude that $\tilde{\mathcal{G}}_H = \mathcal{G}_H(\tilde{\mathbb{K}})$.
    
Let $\mathcal{F}$ be a Picard-Vessiot extension of $\mathbb{K}(z)$. Any differential field $\mathcal{E}$ with $\mathbb{K}(z) \subseteq \mathcal{E} \subseteq \mathcal{F}$ naturally induces an algebraic subgroup $\operatorname{Gal}(\mathcal{F}/\mathcal{E})$ of automorphisms of $\mathcal{F}$ which leave $\mathcal{E}$ fixed. Inversely, any algebraic subgroup $\mathcal{H}$ of $\mathcal{G}$ gives rise to the differential field $\operatorname{Fix}(\mathcal{H})$ with $\mathbb{K}(z) \subseteq \operatorname{Fix}(\mathcal{H}) \subseteq \mathcal{F}$ of all elements which are invariant under the action of $\mathcal{H}$. We say that $\mathcal{E}$ (resp. $\mathcal{H}$) is closed if $\operatorname{Fix}(\operatorname{Gal}(\mathcal{F}/\mathcal{E})) = \mathcal{E}$ (resp. $\operatorname{Gal}(\mathcal{F}/\operatorname{Fix}(\mathcal{H})) = \mathcal{H}$). In that case, the extension $\mathcal{F} \supseteq \mathcal{E}$ is said to be normal, i.e. every element in $\mathcal{F} \setminus \mathcal{E}$ is moved by an automorphism of $\mathcal{F}$ over $\mathcal{E}$. The main theorem from differential Galois theory states that the Galois correspondences are bijective [Kap57, Theorem 5.9].
    
        
        
          
Theorem. (a) The correspondences $\mathcal{E} \mapsto \operatorname{Gal}(\mathcal{F}/\mathcal{E})$ and $\mathcal{H} \mapsto \operatorname{Fix}(\mathcal{H})$ between closed differential fields and closed algebraic subgroups are bijective.

(b) $\mathcal{H}$ is a closed normal subgroup of $\mathcal{G}$ if and only if the extension $\operatorname{Fix}(\mathcal{H}) \supseteq \mathbb{K}(z)$ is normal. In that case, $\operatorname{Gal}(\operatorname{Fix}(\mathcal{H})/\mathbb{K}(z)) \cong \mathcal{G}/\mathcal{H}$.

In particular, $\operatorname{Fix}(\mathcal{G}) = \mathbb{K}(z)$: if $\sigma(f) = f$ for all $\sigma \in \mathcal{G}$, then $f \in \mathbb{K}(z)$.
    
Consider a continuous path $\gamma$ on $\mathbb{C} \setminus \Sigma$ from $z_0$ to $z_1$. Then analytic continuation of the canonical basis $H_{z_0}$ at $z_0$ along $\gamma$ yields a basis of solutions to $Lh = 0$ at $z_1$. The matrix $\Delta_{\gamma}$ with

  $\gamma(H_{z_0}) = \Delta_{\gamma} H_{z_1}$   (1)

is called the connection matrix or transition matrix along $\gamma$. In particular, if $z_1 = z_0$, then we call $\Delta_{\gamma}$ a monodromy matrix based in $z_0$. We clearly have

  $\Delta_{\gamma' \gamma} = \Delta_{\gamma} \Delta_{\gamma'}$

for the composition of paths, so the monodromy matrices based in $z_0$ form a group $\mathcal{M}$ which is called the monodromy group. Given a path $\delta$ from $z_0$ to $z_1$, we notice that $\mathcal{M}_{z_1} = \Delta_{\delta} \mathcal{M}_{z_0} \Delta_{\delta}^{-1}$. Since any differential relation satisfied by $H$ is again satisfied by its analytic continuation along $\gamma$, we have $\mathcal{M} \subseteq \mathcal{G}_H$ and $\overline{\mathcal{M}} \subseteq \mathcal{G}_H$.
    
If we regard $H_{z_0}$ and $H_{z_1}$ as row vectors, then (1) has to be transposed. The roles of $H_{z_0}$ and $H_{z_1}$ may also be interchanged modulo inversion of $\Delta_{\gamma}$.
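Monodromy matrices can be approximated by numerically continuing the canonical basis along the loop. As a toy illustration (not the paper's certified algorithm), the sketch below integrates the order-one operator $\partial - \alpha/z$, whose solutions are $c z^{\alpha}$, along the unit circle around $0$; the resulting $1 \times 1$ monodromy matrix is $e^{2\pi \mathrm{i} \alpha}$:

```python
import cmath

def monodromy_scalar(alpha, steps=2000):
    """Continue the solution of y' = (alpha/z) y along the loop
    z(t) = exp(2*pi*i*t), t in [0,1], with classical RK4; the ratio
    y(1)/y(0) approximates the monodromy factor exp(2*pi*i*alpha)."""
    z = lambda t: cmath.exp(2j * cmath.pi * t)
    dz = lambda t: 2j * cmath.pi * cmath.exp(2j * cmath.pi * t)
    f = lambda t, y: alpha / z(t) * dz(t) * y   # dy/dt along the path
    y = 1.0 + 0.0j
    h = 1.0 / steps
    t = 0.0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

alpha = 0.25
m = monodromy_scalar(alpha)
# compare with the exact monodromy exp(2*pi*i*alpha)
print(abs(m - cmath.exp(2j * cmath.pi * alpha)))
```

For higher-order operators the same continuation is applied to the full fundamental system, yielding an $r \times r$ matrix; certified variants use power series expansions with rigorous error bounds instead of RK4.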
    
Now assume that $L$ admits a singularity at $z = 0$ (if the singularity lies at another finite point, then we may reduce to this case modulo a translation; singularities at infinity may be brought back to zero using the transformation $z \mapsto 1/z$). It is well-known [Fab85, vH96] that $Lh = 0$ admits a computable formal basis of solutions of the form
    
  $h = e^{\varphi(z^{-1/q})} z^{\alpha} (\psi_0 + \psi_1 \log z + \cdots + \psi_k \log^k z)$   (2)

with $q \in \mathbb{N}^{>}$, $\varphi \in \mathbb{K}[u]$, $\alpha \in \mathbb{K}$ and $\psi_0, \ldots, \psi_k \in \mathbb{K}[[z^{1/q}]]$. We will denote by $\mathcal{S}$ the set of finite sums of expressions of the form (2). We may see $\mathcal{S}$ as a differential subring of a formal differential field of “complex transseries” $\mathbb{T}$ [vdH01a] with constant field $\mathbb{C}$.
    
We recall that transseries in $\mathbb{T}$ are infinite linear combinations $f = \sum_{\mathfrak{m}} f_{\mathfrak{m}} \mathfrak{m}$ of “transmonomials” with “grid-based support”. The set $\mathfrak{T}$ of transmonomials forms a totally ordered vector space for exponentiation by reals and the asymptotic ordering $\preccurlyeq$. In particular, each non-zero transseries $f$ admits a unique dominant monomial $\mathfrak{d}_f$. It can be shown [vdH01a] that there exists a unique basis $H = (h_1, \ldots, h_r)$ of solutions to $Lh = 0$ of the form (2), with $h_{i, \mathfrak{d}_{h_i}} = 1$ and $h_{i, \mathfrak{d}_{h_j}} = 0$ for all $j \neq i$. We call $H$ the canonical basis of solutions in $\mathbb{T}$ and there is an algorithm which computes $H$ as a function of $L$.
    
Let $\mathcal{S}_0$ be the subset of $\mathcal{S}$ of all finite sums of expressions of the form (2) with $\varphi = 0$. Then any $h \in \mathcal{S}$ can uniquely be written as a finite sum $h = \sum_{\varphi} h_{\varphi} e^{\varphi}$, where $h_{\varphi} \in \mathcal{S}_0$. Let $\mathcal{X}$ be the group of all automorphisms $\sigma$ of $\mathcal{S}$ for which there exists a multiplicative mapping $\lambda : \varphi \mapsto \lambda_{\varphi} \in \mathbb{K}^{*}$ with $\sigma(h) = \sum_{\varphi} \lambda_{\varphi} h_{\varphi} e^{\varphi}$ for all $h \in \mathcal{S}$. Then every $\sigma \in \mathcal{X}$ preserves differentiation and maps the Picard-Vessiot extension $\mathcal{F}$ of $\mathbb{K}(z)$ into itself. In particular, the set of restrictions $\sigma_{|\mathcal{F}}$ of elements $\sigma \in \mathcal{X}$ to $\mathcal{F}$ is a subset of $\mathcal{G}$.
    
Lemma. Assume that $h \in \mathcal{S}$ is fixed under every $\sigma \in \mathcal{X}$. Then $h \in \mathcal{S}_0$.
    
Proof. Assume that $h \notin \mathcal{S}_0$ and let $e^{\varphi}$ with $\varphi \neq 0$ be an exponential with $h_{\varphi} \neq 0$. Let $V$ be a supplement of the $\mathbb{Q}$-vector space $\mathbb{Q} \varphi$ inside the $\mathbb{Q}$-vector space spanned by all exponents. Let $\sigma$ be the mapping in $\mathcal{X}$ which sends $e^{c \varphi + \rho}$ to $\lambda^{c} e^{c \varphi + \rho}$ for each $c \in \mathbb{Q}$ and $\rho \in V$, where $\lambda \in \mathbb{K}^{*}$ is not a root of unity. Then we clearly have $\sigma(h) \neq h$. □
Let $E = \{\varphi_1, \ldots, \varphi_r\}$ be the set of exponents corresponding to the exponentials of the elements of the canonical basis $H$. Using linear algebra, we may compute a multiplicatively independent set $\{e^{\psi_1}, \ldots, e^{\psi_l}\}$ such that $e^{\varphi_i} = e^{m_{i,1} \psi_1} \cdots e^{m_{i,l} \psi_l}$ for certain $\boldsymbol{m}_i \in \mathbb{Z}^{l}$ and all $i$.
    
Proposition. The restriction of $\mathcal{X}$ to $\mathcal{F}$ is generated (as a closed algebraic group) by the matrices $\operatorname{diag}(\boldsymbol{\lambda}^{\boldsymbol{m}_1}, \ldots, \boldsymbol{\lambda}^{\boldsymbol{m}_r})$ where $\boldsymbol{\lambda} \in (\mathbb{K}^{*})^{l}$ is chosen arbitrarily.
    
Proof. Let $G$ be the group generated by the matrices $\operatorname{diag}(\boldsymbol{\lambda}^{\boldsymbol{m}_1}, \ldots, \boldsymbol{\lambda}^{\boldsymbol{m}_r})$. Notice that each individual matrix with multiplicatively independent entries $\lambda_1, \ldots, \lambda_l$ generates $\overline{G}$: assuming that $\lambda_1, \ldots, \lambda_l$ satisfy no non-trivial multiplicative relations, the variety $\overline{G}$ is irreducible of dimension $l$ and such a matrix is not contained in an algebraic group of dimension $< l$. Now any element of the restriction of $\mathcal{X}$ to $\mathcal{F}$ is a diagonal matrix $\operatorname{diag}(\lambda_{\varphi_1}, \ldots, \lambda_{\varphi_r})$ for some multiplicative mapping $\lambda$. Hence

  $\operatorname{diag}(\lambda_{\varphi_1}, \ldots, \lambda_{\varphi_r}) = \operatorname{diag}(\boldsymbol{\lambda}^{\boldsymbol{m}_1}, \ldots, \boldsymbol{\lambda}^{\boldsymbol{m}_r})$, with $\boldsymbol{\lambda} = (\lambda_{\psi_1}, \ldots, \lambda_{\psi_l})$.

Conversely, each element $\boldsymbol{\lambda} \in (\mathbb{K}^{*})^{l}$ determines a multiplicative mapping on the group generated by $\psi_1, \ldots, \psi_l$, which may be further extended to all exponents using Zorn's lemma and the fact that $\mathbb{K}$ is algebraically closed. It follows that $G$ coincides with the restriction of $\mathcal{X}$ to $\mathcal{F}$. □
Assume that the basis $H$ consists of solutions of the form (2) and let $\sigma$ be the transformation which sends $z^{1/q}$ to $e^{2 \pi \mathrm{i}/q} z^{1/q}$, $z^{\alpha}$ to $e^{2 \pi \mathrm{i} \alpha} z^{\alpha}$ and $\log z$ to $\log z + 2 \pi \mathrm{i}$. Then $\sigma$ preserves differentiation, so any solution to $Lh = 0$ of the form (2) is sent to another solution of the same form. In particular, there exists a matrix $\hat{\Delta}$ with $\sigma(H) = \hat{\Delta} H$, called the formal monodromy matrix around $z = 0$. We have $\hat{\Delta} \in \mathcal{G}_H$.
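On the subspace spanned by $z^{\alpha}, z^{\alpha} \log z, \ldots, z^{\alpha} \log^{m-1} z$, the substitutions above act linearly: $z^{\alpha} \log^k z \mapsto e^{2 \pi \mathrm{i} \alpha} z^{\alpha} (\log z + 2 \pi \mathrm{i})^k$, and expanding binomially yields the matrix. A minimal sketch (our own illustration):

```python
import cmath
from math import comb

def formal_monodromy(alpha, m):
    """Matrix of the formal monodromy on the basis
    b_k = z^alpha * log(z)^k, k = 0..m-1, using
    b_k -> e^(2*pi*i*alpha) * z^alpha * (log z + 2*pi*i)^k.
    Row k lists the coordinates of the image of b_k."""
    e = cmath.exp(2j * cmath.pi * alpha)
    M = [[0j] * m for _ in range(m)]
    for k in range(m):
        for j in range(k + 1):
            # (log z + 2*pi*i)^k = sum_j C(k,j) (2*pi*i)^(k-j) log(z)^j
            M[k][j] = e * comb(k, j) * (2j * cmath.pi) ** (k - j)
    return M

M = formal_monodromy(0.5, 2)
# alpha = 1/2: e^(2*pi*i*alpha) = -1, so M = [[-1, 0], [-2*pi*i, -1]]
```

In particular, the formal monodromy is unipotent-times-diagonalizable with eigenvalues $e^{2 \pi \mathrm{i} \alpha}$, as expected from the structure of the solutions (2).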
    
Lemma. Assume that $h \in \mathcal{S}$ is fixed under $\sigma$ and every element of $\mathcal{X}$. Then $h \in \mathbb{K}((z))$.
    
Proof. By the previous lemma, $h \in \mathcal{S}_0$. Interpreting $h$ as a polynomial in $\log z$ with leading coefficient $z^{\alpha} \psi_k \neq 0$, we must have $k = 0$ since

  $\sigma(h) - h = z^{\alpha} (2 \pi \mathrm{i}\, k\, \psi_k \log^{k-1} z + \cdots) \neq 0$ whenever $k > 0$.

Consequently, $h$ is of the form $h = z^{\alpha} \psi(z^{1/q})$ and

  $\sigma(z^{\alpha + n/q}) = e^{2 \pi \mathrm{i} (\alpha + n/q)} z^{\alpha + n/q}$

for every $n$ with a non-zero coefficient in $\psi$, whence $\alpha + n/q \in \mathbb{Z}$ for all such $n$ and $h \in \mathbb{K}((z))$. □
Let $\mathcal{P}$ be the differential $\mathbb{K}$-algebra of infinitesimal Puiseux series in $z^{1/q}$ for some $q \in \mathbb{N}^{>}$ and consider a formal power series solution $\tilde{f} \in \mathcal{P}$ to $L \tilde{f} = 0$. The process of accelero-summation enables to associate an analytic meaning $f$ to $\tilde{f}$ in a sector near the origin of the Riemann surface $\mathcal{R}$ of $\log z$, even in the case when $\tilde{f}$ is divergent. Schematically speaking, we obtain $f$ through a succession of transformations:

  $\tilde{f} \xrightarrow{\ \hat{\mathcal{B}}\ } \hat{f}_1 \xrightarrow{\ \mathcal{A}_2\ } \hat{f}_2 \xrightarrow{\ \mathcal{A}_3\ } \cdots \xrightarrow{\ \mathcal{A}_p\ } \hat{f}_p \xrightarrow{\ \mathcal{L}\ } f$   (3)

Each $\hat{f}_i$ is a “resurgent function” which realizes $\tilde{f}$ in the “convolution model” with respect to the $i$-th “critical time” $z_i$ (with $z_i = z^{\kappa_i}$ and $\kappa_1 < \cdots < \kappa_p$). In our case, $\hat{f}_i$ is an analytic function which admits only a finite number of singularities above $\mathbb{C}$. In general, the singularities of a resurgent function are located on a finitely generated grid. Let us describe the transformations $\hat{\mathcal{B}}$, $\mathcal{A}_i$ and $\mathcal{L}$ in more detail.
The formal Borel transform $\hat{\mathcal{B}}$. This transformation sends each $z_1^{\alpha}$ to

  $\hat{\mathcal{B}}(z_1^{\alpha}) = \dfrac{\zeta_1^{\alpha - 1}}{\Gamma(\alpha)}$

where $\zeta_1$ is the new variable in the Borel plane, and extends by strong linearity:

  $\hat{\mathcal{B}}(\tilde{f}) = \sum_{\alpha} \tilde{f}_{\alpha}\, \hat{\mathcal{B}}(z_1^{\alpha})$

The result is a formal series $\hat{f}_1$ in $\zeta_1$ which converges near the origin of $\mathcal{R}$. The formal Borel transform is a morphism of differential algebras which sends multiplication to the convolution product, i.e. $\hat{\mathcal{B}}(\tilde{f} \tilde{g}) = \hat{\mathcal{B}}(\tilde{f}) * \hat{\mathcal{B}}(\tilde{g})$.
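Concretely, on a power series $\tilde{f} = \sum_{n \geq 1} c_n z^n$ the formal Borel transform yields $\hat{f}(\zeta) = \sum_{n \geq 1} c_n \zeta^{n-1}/(n-1)!$. For Euler's divergent series $\sum_{n \geq 0} (-1)^n n!\, z^{n+1}$ this produces the convergent geometric series $1/(1+\zeta)$; a minimal sketch:

```python
from math import factorial

def borel(coeffs):
    """Formal Borel transform: maps sum_{n>=1} c_n z^n to
    sum_{n>=1} c_n zeta^(n-1)/(n-1)!. Coefficient lists are indexed
    by degree; coeffs[0] is the constant term and must vanish."""
    assert coeffs[0] == 0
    return [coeffs[n] / factorial(n - 1) for n in range(1, len(coeffs))]

# Euler's divergent series: coefficient of z^(n+1) is (-1)^n * n!
N = 10
c = [0] + [(-1) ** n * factorial(n) for n in range(N)]
print(borel(c))  # [1.0, -1.0, 1.0, ...] -> the geometric series 1/(1+zeta)
```

The factorial growth of the coefficients is exactly cancelled by the division, which is why the Borel transform converges near the origin of the Borel plane.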
    
Accelerations $\mathcal{A}_i$. By construction, the function $\hat{f}_i$ is defined near the origin of $\mathcal{R}$, can be analytically continued on the axis $e^{\mathrm{i} \theta_i} \mathbb{R}^{>}$, and admits a growth of the form $e^{C |\zeta_i|^{\mu_i}}$ at infinity. The next function $\hat{f}_{i+1}$ is obtained from $\hat{f}_i$ by an acceleration of the form

  $\hat{f}_{i+1}(\zeta_{i+1}) = \displaystyle\int_{0}^{e^{\mathrm{i} \theta_i} \infty} \hat{f}_i(\zeta_i)\, K_i(\zeta_{i+1}, \zeta_i)\, \mathrm{d} \zeta_i$

where the acceleration kernel $K_i$ only depends on the ratio $\kappa_{i+1}/\kappa_i$ and admits an explicit integral representation. For large $\zeta_i$ on an axis with fixed direction, it can be shown that the kernel decays like $e^{-C' |\zeta_i|^{\nu_i}}$ for a suitable exponent $\nu_i > 0$ and some constant $C' > 0$. Assuming that $\theta_{i+1}$ satisfies

  $|\theta_{i+1} - \theta_i| < \dfrac{\pi}{2} \left( \dfrac{1}{\kappa_i} - \dfrac{1}{\kappa_{i+1}} \right)$   (5)

it follows that the acceleration $\hat{f}_{i+1}$ of $\hat{f}_i$ is well-defined for small $\zeta_{i+1}$ on the axis $e^{\mathrm{i} \theta_{i+1}} \mathbb{R}^{>}$. The set $\Xi_i$ of directions $\theta$ such that $\hat{f}_i$ admits a singularity on $e^{\mathrm{i} \theta} \mathbb{R}^{>}$ is called the set of Stokes directions. Accelerations are morphisms of differential $\mathbb{C}$-algebras which preserve the convolution product.
    
The Laplace transform $\mathcal{L}$. The last function $\hat{f}_p$ is defined near the origin of $\mathcal{R}$, can be analytically continued on the axis $e^{\mathrm{i} \theta_p} \mathbb{R}^{>}$ and admits at most exponential growth at infinity. The function $f$ is now obtained using the analytic Laplace transform

  $f(z) = \displaystyle\int_{0}^{e^{\mathrm{i} \theta_p} \infty} \hat{f}_p(\zeta_p)\, e^{-\zeta_p / z_p}\, \mathrm{d} \zeta_p$

On an axis with

  $|\arg z_p - \theta_p| < \dfrac{\pi}{2}$   (6)

the function $f$ is defined for all sufficiently small $z_p$. The set $\Xi_p$ of Stokes directions is defined in a similar way as in the case of accelerations. The Laplace transform is a morphism of differential $\mathbb{C}$-algebras which is inverse to the Borel transform and sends the convolution product to multiplication.
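Putting the Borel and Laplace steps together (with a single critical time, i.e. plain Borel summation, no accelerations needed), one can sum Euler's divergent series $\tilde{f} = \sum_{n \geq 0} (-1)^n n!\, z^{n+1}$: its Borel transform is $1/(1+\zeta)$, and the Laplace integral along $\mathbb{R}^{>}$ yields an actual analytic function asymptotic to the series. A numeric sketch (our illustration):

```python
from mpmath import mp, mpf, quad, exp, inf, factorial

mp.dps = 30

def euler_sum(z):
    """Borel-Laplace sum of sum_{n>=0} (-1)^n n! z^(n+1):
    f(z) = int_0^inf exp(-t/z) / (1 + t) dt."""
    return quad(lambda t: exp(-t / z) / (1 + t), [0, inf])

z = mpf('0.05')
f = euler_sum(z)
# the sum is asymptotic to the divergent series: compare with a truncation
partial = sum((-1) ** n * factorial(n) * z ** (n + 1) for n in range(10))
print(f, partial)
```

The truncated series agrees with the Borel-Laplace sum up to an exponentially small error, illustrating why the sum provides the “analytic meaning” of the divergent expansion.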
    
       .
    
      Given critical times  in  and directions  satisfying
      (5), we say that a formal power series  is
      accelero-summable in the multi-direction  if the above
      scheme yields an analytic function  near the origin of
      any axis on  satisfying (6). We denote the set of such
      power series by , where . Inversely, given , we denote
      by  the set of all triples  such that  and so that  is
      well-defined. In that case, we write  and .
    
    
      The set  forms a differential subring of  and the map 
      for  is injective. If  and  are obtained from  and  by
      inserting a new critical time and an arbitrary
      direction, then we have . In particular,  contains ,
      where  denotes the ring of convergent infinitesimal
      Puiseux series. Let  be sets of directions such that
      each  is finite modulo . Let  be the subset of  of
      multi-directions  which verify (5). We denote ,  and .
    
    
      Taking , the notion of accelero-summation extends to
      formal expressions of the form (2) and more general
      elements of  as follows. Given , ,  and , we simply
      define . It can be checked that this definition is
      coherent when replacing  by  for some . By linearity,
      we thus obtain a natural differential subalgebra  of
      accelero-summable transseries with critical times  and
      in the multi-direction . We also have natural
      analogues  and  of  and .
    
    
The main result we need from the theory of accelero-summation is the following theorem [É87, Bra91].
      Let  be a formal solution to . Then .
    
    
      Let  be the canonical basis of formal solutions to  at
      the origin. We have .
    
    
 .
       
      
      
      We say that  is stable under Stokes morphisms if, for
      all , there exists a  with , and if the same property
      is recursively satisfied by . We denote by  the
      differential subring of  which is stable under Stokes
      morphisms. The mappings  will be called Stokes
      morphisms and we denote by  the group of all such maps.
    
    
      Assume that  is fixed under . Then  is convergent.
    
    
        Assume that  admits a singularity at  and choose 
        maximal and  minimal. Modulo the removal of
        unnecessary critical times, we may assume without
        loss of generality that . Let  with  be a
        multi-direction satisfying (5), such that  for all .
        Then
      
      
 
      
        . Now  is obtained by integration around  along the
        axis . By classical properties of the Laplace
        integral [CNP93, Pré I.2], the function  cannot
        vanish, since  admits a singularity in  (if the
        Laplace integrals corresponding to both directions 
        coincide, then the Laplace transform can be
        analytically continued to a larger sector, which is
        only possible if  is analytic in a sector which
        contains both directions ). We conclude that , so  is
        not fixed under .
          
         
        
        Let  be a set of multi-directions  satisfying (5),
        with  for exactly one , and so that for all , we have
        either  or  and  for some small . For every , we have
      
 
      
        By looking more carefully at the proof of
        proposition 12, we observe that it suffices to assume
        that  is fixed under all Stokes morphisms of the
        form , instead of all elements in .
      
      
        We say that  are equivalent if  for all , where  is
        the denominator of . We notice that  is finite modulo
        this equivalence relation. We denote by  a subset
        of  with one element in each equivalence class.
      
      
      Let us now come back to our differential equation .
      Given , the map  induces an isomorphism between  and .
      We denote by  the unique matrix with . Given a
      second , the vector  is again in , whence  by repeating
      the argument. In particular, the Stokes morphism 
      induces the Stokes matrix .
    
    
      We are now in a position to construct a finite set  of
      generators for the Galois group  in a regular point .
    
    
          Algorithm Compute_generators 
        
            Input: an operator  and a regular point 
          
            Output: a set  of generators for 
          
             
          
            for each  do
          
          return 
        
      With  constructed as above, the differential Galois
      group  is generated by  as a closed algebraic subgroup
      of .
    
    
        Assume that  is fixed by each element of . We have to
        prove that . Given a singularity , let  be the
        “continuation” of  along  (which involves analytic
        continuation until  followed by
        “decelero-unsummation”). By proposition 7, we have .
        From proposition 12 and remark 13, we next deduce
        that  is convergent. Indeed, since , its realization 
        in the convolution model with critical time  is a
        function in . Consequently,  whenever  and  are
        equivalent. At this point we have shown that  is
        meromorphic at . But a function which is meromorphic
        at all points of the Riemann sphere  is actually a
        rational function. It follows that .
       
       
      
      
        Consider the pointed alien derivatives  for . Modulo
        the identification of functions in the formal model,
        the convolution models and the geometric model via
        accelero-summation, the pointed alien derivatives
        commute with the usual derivation . Consequently, if 
        is a solution to , then we also have . In particular,
        given the canonical basis of solutions  to , there
        exists a unique matrix  with
      
 
      
        This equation is called the bridge equation. Since 
        admits only a finite number of singularities and the
        alien derivations “translate singularities”, we have 
        for some , so the matrices  are nilpotent. More
        generally, if  are -linearly independent, then all
        elements in the algebra generated by  are nilpotent.
      
        It is easily shown that the Stokes morphisms
        correspond to the exponentials  of directional alien
        derivations . This yields a way to reinterpret the
        Stokes matrices in terms of the  with . In
        particular, the preceding discussion implies that the
        Stokes matrices are unipotent. The extra flexibility
        provided by pointwise over directional alien
        derivatives admits many applications, such as the
        preservation of realness [Men96]. For further
        details, see [É85, É87, É92, É93].
      
      A complex number  is said to be effective if there
      exists an approximation algorithm for  which takes  on
      input and which returns an -approximation  of  for
      which . The time complexity of this approximation
      algorithm is the time  it takes to compute a
      -approximation for . It is not hard to show that the
      set  of effective complex numbers forms a field.
      However, given , the question whether  is undecidable.
      The following theorems were proved in [CC90, vdH99,
      vdH01b].
    
    
        Let , ,  and . Given a broken line path  on , we
        have:
      
        (a) The value  of the analytic continuation of  at
        the end-point of  is effective.
      
        (b) The time complexity is  for , when not counting
        the approximation time of the input data ,  and .
      
        (c) One may compute  as in (b) as a function of , 
        and .
      
        Let  be regular singular in . Let ,  and . Then  is
        well-defined for all sufficiently small  on the
        effective Riemann surface  of  above , and:
      
        (a)  is effective.
      
        (b) The time complexity is  for , when not counting
        the approximation time of the input data ,  and .
      
        (c) One may compute  as in (b) as a function of , 
        and .
      
      In general, the approximation of  involves the
      existence of certain bounds. In each of the above
      theorems, the assertion (c) essentially states that
      there exists an algorithm for computing these bounds as
      a function of the input data. This property does not
      merely follow from (a) and (b) alone.
    
    
The following theorem has been proved in [vdH05b].
        Let  be singular in . Let  be as in theorem 10 and .
        Given  with  and , we have:
      
        (a)  is effective.
      
        (b) The time complexity is  for , when not counting
        the approximation time of the input data ,  and .
      
        (c) One may compute  as in (b) as a function of , 
        and .
      
      If we replace  by an arbitrary effective algebraically
      closed subfield  of , then the assertions (a) and (c)
      in the three above theorems remain valid (see [vdH05a,
      vdH03] in the cases of theorems 17 and 18), but the
      complexity in (b) usually drops back to . Notice also
      that we may replace  by the transition matrix along  in
      each of the theorems. The following theorem summarizes
      the results from section 2.
    
    
        Let  be an effective algebraically closed constant
        field of . Then there exists an algorithm which
        takes  and  on input, and which computes a finite
        set , such that:
      
        (a)  is generated by  as a closed algebraic subgroup
        of .
      
        (b) If , then the entries of the matrices in  have
        time complexity .
      
        We have already seen that the set  of effective
        complex numbers forms a field. Similarly, the set  of
        effective complex numbers with an approximation
        algorithm of time complexity  forms a field, since
        the operations , ,  and  can all be performed in
        time . In particular, the classes of matrices with
        entries in  resp.  are stable under the same
        operations. Now in the algorithm
        Compute_generators, we may take broken-line
        paths with vertices above  for the . Hence (a)
        and (b) follow from theorem 19(a)
        resp. (b) and the above observations.
       
      
      Given , we may endow  with an approximate zero-test
      for which  if and only if . We will denote this field
      by . Clearly, this zero-test is not compatible with the
      field structure of . Nevertheless, any finite
      computation which can be carried out in  with an oracle
      for zero-testing can be carried out in exactly the same
      way in  for a sufficiently small . Given , we will
      denote by  the “cast” of  to  and similarly for
      matrices with coefficients in .
    
    
      Effective numbers  usually come with a natural bound 
      for . Then, given  with , it is even better to use the
      approximate zero-test:  if and only if . Notice that
      the bound  usually depends on the internal
      representation of  and not merely on  as a number in .
    
    
      Let  be an effective algebraically closed subfield
      of . Consider a monic linear differential operator ,
      where . In this section, we present an algorithm for
      finding a non-trivial factorization  with  whenever
      such a factorization exists.
    
    
     Factorizations of  and invariant subspaces under 
      Let  be a basis of solutions for the equation , where 
      is an abstract differential field. We denote the
      Wronskian of  by
     
 
    It is classical (and easy to check) that
|  | (7) |
      When expanding the determinant  in terms of the
      matrices
     
 
    it follows that
 
    
      Denoting by  the logarithmic derivative of , it can
      also be checked by induction that
     
 
    
      admits  as solutions, whence , using Euclidean division
      in the skew polynomial ring .
    
    
        
          
          
            (a) If  admits a factorization , then  leaves 
            invariant.
          
            (b) If  is an invariant subvector space of ,
            then  admits a factorization  with .
          
          
        Assume that  admits a factorization . Then, given 
        and , we have , whence . Conversely, assume that  is
        an invariant subvector space of  and let  be a basis
        of . Then we observe that  for all . Consequently,
      
 
      
        for all , so that , by corollary 3. Hence
      
|  | (8) |
        which vanishes on . But this is only possible if 
        divides .
         
         
        
      Let  be a non-unitary algebra of nilpotent matrices
      in . Then there exists a basis of  in which  is lower
      triangular for all .
    
    
        Let  be a matrix such that  is a non-zero vector
        space of minimal dimension. Given  and , we claim
        that . Assume the contrary, so that . By the
        minimality hypothesis, we must have . In
        particular,  and . Again by the minimality
        hypothesis, it follows that . In other words, the
        restriction of  to  is an isomorphism on . Hence 
        admits a non-zero eigenvector in , which contradicts
        the fact that  is nilpotent.
      
        We now prove the lemma by induction on . If  or ,
        then we have nothing to do, so assume that  and . We
        claim that  admits a non-trivial invariant subvector
        space . Indeed, we may take  if  and  if . Now
        consider a basis  of  and complete it to a basis 
        of . Then each matrix in  is lower triangular with
        respect to this basis. Let  and  be the algebras of
        lower dimensional matrices which occur as upper left
        resp. lower right blocks of matrices in . We conclude
        by applying the induction hypothesis on  and .
       
        
      Let  be a finite set of non-zero nilpotent matrices. If
      all matrices in the -algebra  generated by  are
      nilpotent, then it is easy to compute a basis for which
      all matrices in  are lower triangular. Indeed, setting 
      for all , we first compute a basis of . We successively
      complete this basis into a basis of ,  and so on
      until .
    
    
      If not all matrices in  are nilpotent, then the proof
      of lemma 23 indicates a method for the computation of a
      matrix in  which is not nilpotent. Indeed, we start by
      picking an  and set , where  is smallest with . We next
      set  and iterate the following loop. Take a matrix  and
      distinguish the following three cases:
    
       and continue.
        
       ,  and continue.
        
       .
      
      At the end of our loop, we either found a non-nilpotent
      matrix, or we have  for all . In the second case, we
      obtain a non-trivial invariant subspace of  as in the
      proof of lemma 23 and we recursively apply the
      algorithm on this subspace and a complement. In fact,
      the returned matrix is not even monopotent
      (i.e. not of the form , where  is a nilpotent matrix),
      since it admits both zero and a non-zero number as
      eigenvalues.
    
    
      Proposition 22 in combination with theorem 20 implies
      that the factorization of linear differential operators
      in  reduces to the computation of non-trivial invariant
      subvector spaces under the action of  whenever they
      exist.
    
    
      In this section, we will first solve a slightly simpler
      problem: assuming that  is an effective algebraically
      closed field and given a finite set of matrices , we
      will show how to compute a non-trivial invariant
      subspace  of  under the action of , whenever such a 
      exists.
    
    

      Given a vector , it is easy to compute the smallest
      subspace  of  which is invariant under the action of 
      and which contains . Indeed, starting with a basis , we
      keep enlarging  with elements in  until saturation.
      Since  will never contain more than  elements, this
      algorithm terminates. A candidate vector  for
      generating a non-trivial invariant subspace of  is said
      to be good if .
    
    
     The -algebra generated by 
      A subspace  is an invariant subspace for  if and only
      if  is an invariant subspace for the -algebra 
      generated by . Again it is easy to compute a basis
      for . We start with a basis  of  and keep adjoining
      elements in  to  until saturation. We will avoid the
      explicit basis of , which may contain as many as 
      elements, and rather focus on the efficient computation
      of good candidate vectors.
    
    
     -splittings
      A decomposition , where  are non-empty vector spaces,
      will be called an -splitting of  if the projections 
      of  on  are all in . Then, given , we have  if and only
      if  for all . If we choose a basis for  which is a
      union of bases for the , we notice that the  are  block
      matrices. In the above algorithm for computing the
      -algebra generated by , it now suffices to compute with
      block matrices of this form. In particular, the
      computed basis of  will consist of such matrices. The
      trivial decomposition  is clearly an -splitting.
      Given , we notice that any -splitting is also an
      -splitting.
    
    
Refining $\mathcal{M}$-splittings

An $\mathcal{M}$-splitting $V = W_1 \oplus \cdots \oplus W_l$ is said to be finer than the $\mathcal{M}$-splitting $V = V_1 \oplus \cdots \oplus V_k$ if $V_i$ is a direct sum of a subset of the $W_j$ for each $i$. Given an $\mathcal{M}$-splitting $V = V_1 \oplus \cdots \oplus V_k$ and an arbitrary element $M \in \operatorname{Alg}(\mathcal{M})$, we may obtain a finer $\mathcal{M}$-splitting w.r.t. $M$ as follows. Let $i \in \{1, \ldots, k\}$ and consider $M_i = \pi_i M \pi_i$. If $\lambda_1, \ldots, \lambda_r$ are the eigenvalues of the restriction of $M_i$ to $V_i$, then $V_i = V_{i, 1} \oplus \cdots \oplus V_{i, r}$ is an $\mathcal{M}$-splitting of $V_i$, where $V_{i, j} = V_i \cap \ker (M_i - \lambda_j)^n$. Collecting these $\mathcal{M}$-splittings, we obtain a finer $\mathcal{M}$-splitting $V = \bigoplus_{i, j} V_{i, j}$ of $V$. This $\mathcal{M}$-splitting, which is said to be refined w.r.t. $M$, has the property that $M$ is monopotent on $V_{i, j}$ for each pair $(i, j)$, with unique eigenvalue $\lambda_j$.
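By way of illustration, the refinement step can be sketched numerically as follows. This is our own sketch, not part of the original algorithm: it assumes the restriction of $M$ to each component is diagonalizable (so eigenvectors span the component) and clusters numerically equal eigenvalues with a tolerance; in general, generalized eigenspaces would be needed, as in the text.

```python
import numpy as np

def refine_splitting(blocks, M, tol=1e-8):
    """Split each component (spanned by the columns of a block) into
    eigenvalue clusters of the restriction of M to that component."""
    refined = []
    for B in blocks:                       # columns of B span one component
        # matrix of M restricted to the component, in the basis B
        R = np.linalg.lstsq(B, M @ B, rcond=None)[0]
        w, E = np.linalg.eig(R)
        done = np.zeros(len(w), dtype=bool)
        for i in range(len(w)):
            if done[i]:
                continue
            grp = np.abs(w - w[i]) < tol   # cluster (numerically) equal eigenvalues
            done |= grp
            refined.append(B @ E[:, grp])  # back to ambient coordinates
    return refined
```

For instance, refining the trivial splitting of $K^3$ with respect to a diagonal matrix with eigenvalues $1, 1, 2$ yields components of dimensions $2$ and $1$.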
    
    
We now have the following algorithm for computing non-trivial $\operatorname{Alg}(\mathcal{M})$-invariant subspaces of $V$ when they exist.
    
Algorithm Invariant_subspace

  Input: a set $\mathcal{M}$ of non-zero matrices in $\operatorname{Mat}_n(K)$
  Output: an $\operatorname{Alg}(\mathcal{M})$-invariant subspace $W$ of $V$ with $0 \neq W \neq V$, or fail

  Step 1. [Initial $\mathcal{M}$-splitting]
    Compute a “random non-zero element” $N$ of $\operatorname{Alg}(\mathcal{M})$
    Compute an $\mathcal{M}$-splitting $V = V_1 \oplus \cdots \oplus V_k$ w.r.t. $N$ and each $M \in \mathcal{M}$

  Step 2. [One dimensional components]
    For every $i$ with $\dim V_i = 1$ and $V_i$ not yet considered, do the following:
      Pick a non-zero $v \in V_i$ and compute $W := \operatorname{Alg}(\mathcal{M})\, v$
      If $W \neq V$ then return $W$
      Otherwise, mark $V_i$ as considered
    If $\dim V_i = 1$ for all $i$ then return fail

  Step 3. [Higher dimensional components]
    Let $i$ be such that $\dim V_i > 1$
    Let $\mathcal{B}$ be the computed basis of $\operatorname{Alg}(\mathcal{M})$
    Let $\mathcal{B}_i := \{\pi_i B \pi_i : B \in \mathcal{B}\}$
    If some element of $\mathcal{B}_i$ is non-monopotent on $V_i$ then go to step 4 and otherwise to step 5

  Step 4. [Non-triangular case]
    Let $M \in \operatorname{Alg}(\mathcal{M})$ be non-monopotent on $V_i$ (cf. previous section)
    Refine the $\mathcal{M}$-splitting w.r.t. $M$ and return to step 2

  Step 5. [Potentially triangular case]
    Choose a non-zero $v \in V_i$ and compute $W := \operatorname{Alg}(\mathcal{M})\, v$
    If $W \neq V$ then return $W$
    Otherwise, let $M$ be in $\mathcal{M}$ with $M v \notin K v$
    Refine the $\mathcal{M}$-splitting w.r.t. $M$
    If this yields a finer $\mathcal{M}$-splitting then return to step 2
    Otherwise, set $v := M v$ and repeat step 5
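The repeated computation of $\operatorname{Alg}(\mathcal{M})\,v$ amounts to saturating a subspace under the action of the generators. A minimal floating-point sketch (the function and its names are ours, not the paper's):

```python
import numpy as np

def invariant_closure(mats, v, tol=1e-9):
    """Orthonormal basis of the smallest subspace containing v and
    stable under every matrix in mats."""
    basis = []
    def add(w):
        w = np.asarray(w, dtype=float)
        for b in basis:                    # project away the current span
            w = w - (b @ w) * b
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
            return True
        return False
    add(v)
    changed = True
    while changed:                         # saturate under the generators
        changed = False
        for M in mats:
            for b in list(basis):
                if add(np.asarray(M, dtype=float) @ b):
                    changed = True
    return basis
```

For a single unipotent generator, the vector $e_1$ already spans an invariant line, while $e_2$ generates a plane; the dimension of the returned basis exhibits exactly the dichotomy exploited in steps 2 and 5.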
The algorithm needs a few additional explanations. In step 1, we may take $N$ to be an arbitrary non-zero element in $\operatorname{Alg}(\mathcal{M})$. However, it is better to take a “small random expression in the elements of $\mathcal{M}$” for $N$. With high probability, this yields an $\mathcal{M}$-splitting which will not need to be refined in the sequel. Indeed, the subset of matrices in $\operatorname{Alg}(\mathcal{M})$ which yield non-maximal $\mathcal{M}$-splittings is a closed algebraic subset of measure zero, since it is determined by coinciding eigenvalues. In particular, given an $\mathcal{M}$-splitting $V = V_1 \oplus \cdots \oplus V_k$ w.r.t. $N$, it will usually suffice to check that each $M \in \mathcal{M}$ is monopotent on each $V_i$, in order to obtain an $\mathcal{M}$-splitting w.r.t. the other elements in $\mathcal{M}$.
    
Throughout the algorithm, the $\mathcal{M}$-splitting gets finer and finer, so the $\mathcal{M}$-splitting ultimately remains constant. From this point on, the space $W$ can only strictly decrease in step 5, so $W$ also remains constant, ultimately. But then we either find a non-trivial invariant subspace in step 5, or all components of the $\mathcal{M}$-splitting become one-dimensional. In the latter case, we either obtain a non-trivial invariant subspace in step 2, or a proof that $\operatorname{Alg}(\mathcal{M})\, v = V$ for every chosen $v$ (and thus for every non-zero $v \in V$).
    
In the numeric context, $K$ is no longer an effective algebraically closed field, but rather a field with an approximate zero-test. In that case, we recall that a number which is approximately zero is not necessarily zero. On the other hand, a number which is not approximately zero is surely non-zero. Consequently, in our algorithm for the computation of $W = \operatorname{Alg}(\mathcal{M})\, v$, the dimension of $W$ can be too small, but it is never too large. In particular, if the algorithm Invariant_subspace fails, then the approximate proof that $\operatorname{Alg}(\mathcal{M})\, v = V$ for every chosen $v$ yields a genuine proof that there are no non-trivial invariant subspaces.
    
Putting together the results from the previous sections, we now have the following algorithm for finding a right factor of $L$.
    
Algorithm Right_factor

  Input: a monic linear differential operator $L \in K(z)[\partial]$
  Output: a non-trivial right factor of $L$, or fail

  Step 1. [Compute generators]
    Choose a base point $z_0$ and let $H$ be a fundamental system of solutions of $L$ at $z_0$
    Compute a finite set $\mathcal{M}$ of generators for the differential Galois group of $L$ (cf. theorem 20)

  Step 2. [Initial precision]
    Let $n$ be a small initial number of terms
    Let $p$ be a small initial precision
    While the generators in $\mathcal{M}$ cannot be approximated at precision $p$ do $p := 2 p$
    Approximate the generators in $\mathcal{M}$ at precision $p$

  Step 3. [Produce invariant subspace]
    Let $W :=$ Invariant_subspace$(\mathcal{M})$
    If $W =$ fail then return fail
    Let $w_1, \ldots, w_d$ be a column basis of $W$
    Let $g_i := H w_i$ for $i = 1, \ldots, d$

  Step 4. [Produce and check guess]
    Let $\tilde{L} \in \mathbb{C}((z - z_0))[\partial]$ be the monic operator of order $d$ with solution space spanned by $g_1, \ldots, g_d$
    Divide $L$ by $\tilde{L}$, producing $Q$ and $R$ with $L = Q \tilde{L} + R$
    If $R \not\approx 0$ then go to step 5
    Reconstruct $\hat{L}$ and $\hat{Q}$ from $\tilde{L}$ and $Q$ with precision $p$
    If we obtain no good approximations or $\hat{Q} \hat{L} \neq L$ then go to step 5
    Return $\hat{L}$

  Step 5. [Increase precision]
    $n := 2 n$
    $p := 2 p$
    Go to step 3
The main idea behind the algorithm is to use proposition 22 in combination with Invariant_subspace so as to provide good candidate right factors of $L$ in $\mathbb{C}((z - z_0))[\partial]$. Using reconstruction of coefficients in $K$ from Laurent series in $\mathbb{C}((z - z_0))$ with increasing precisions, we next produce good candidate right factors in $K(z)[\partial]$. We keep increasing the precision until we find a right factor or a proof that $L$ is irreducible. Let us detail the different steps a bit more:
    
All computations are done with Laurent series truncated at $n$ terms and approximate zero-tests in $K$. The degree of a rational function $f = P / Q$, with $P$ and $Q$ coprime, is defined by $\deg f = \max (\deg P, \deg Q)$. The initial precisions $n$ and $p$ have been chosen as small as possible. Indeed, we want to take advantage of a possible quick answer when computing with a small precision (see also the explanations below of step 5).

The generators in $\mathcal{M}$ can be approximated up to any desired precision, by remark 24. Effective power series and Laurent series are defined in a similar way as effective real numbers (in particular, we don't assume the existence of an effective zero-test). Efficient algorithms for such computations are described in [vdH02].

The reconstruction of $\hat{L}$ and $\hat{Q}$ from $\tilde{L}$ and $Q$ contains two ingredients: we use Padé approximation to find rational function approximations of degree $\leq n$ and the LLL-algorithm to approximate numbers in $\mathbb{C}$ by numbers in $K$.
      
The problem of reconstructing elements in $K$ from elements in $\mathbb{C}$ is an interesting topic on its own. In theory, one may consider the polynomial algebra over $\mathbb{Q}$ generated by all coefficients occurring in $L$ and the number $x$ we wish to reconstruct. We may then apply the LLL-algorithm [LLL82] on the lattice spanned by the monomials of smallest total degree (for instance) and search for minimal $p$-digit relations. If $K$ is the algebraic closure of $\mathbb{Q}$, then we may simply use the lattice spanned by the first powers $1, x, x^2, \ldots$ of $x$.
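As a toy illustration of the first ingredient, a rational function can be recovered from a truncated series by solving a small linear system for the denominator coefficients. The following exact-arithmetic sketch is our own (with the classical normalization $b_0 = 1$; here $p$ and $q$ denote numerator and denominator degrees, not precisions):

```python
from fractions import Fraction

def _solve(A, rhs):
    """Tiny Gaussian elimination over the rationals."""
    n = len(rhs)
    M = [A[i][:] + [rhs[i]] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        inv = M[c][c]
        M[c] = [x / inv for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def pade(series, p, q):
    """(p, q) Pade approximant a/b of a truncated series, with b(0) = 1."""
    c = [Fraction(x) for x in series]
    get = lambda k: c[k] if 0 <= k < len(c) else Fraction(0)
    # denominator: sum_j b_j c_{k-j} = 0 for k = p+1, ..., p+q
    A = [[get(p + i + 1 - j) for j in range(1, q + 1)] for i in range(q)]
    rhs = [-get(p + i + 1) for i in range(q)]
    b = [Fraction(1)] + (_solve(A, rhs) if q else [])
    # numerator coefficients follow by convolution
    a = [sum(b[j] * get(k - j) for j in range(q + 1)) for k in range(p + 1)]
    return a, b
```

For example, the series $1 + x + x^2 + \cdots$ is recognized as $1 / (1 - x)$ from its first two coefficients.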
    
At a sufficiently large precision $p$, the LLL-algorithm will ultimately succeed for all coefficients of a candidate factorization which need to be reconstructed. If there are no factorizations, then the algorithm will ultimately fail at step 3. This proves the termination of Right_factor.
    
As to the reconstruction of elements in $K$, it would be nice to use more of the structure of the original problem. For instance, a factorization of $L$ actually yields relations on the coefficients which we may try to use. For high precision computations, it is also recommended to speed up the LLL-algorithm using a similar dichotomic algorithm as for fast g.c.d. computations [Moe73, PW02].
    
If a bound for the degrees of the coefficients of a right factor is available, using techniques from [BB85, vH97, vdPS03], then one may take this bound for $n$ instead of doubling $n$ in step 5. Of course, bounds for the required precision $p$ are even harder to obtain. See [BB85] for some results in that direction.
    
Throughout this section, $K$ will stand for the field of effective complex numbers with the approximate zero-test at precision $p$. This field has the following properties:
    
EH1. There exists an algorithm which takes $\lambda \in K^{\neq}$ on input and which decides whether $\lambda$ is a root of unity.

EH2. There exists an algorithm which takes $\lambda_1, \ldots, \lambda_r \in K^{\neq}$ on input and which computes a finite set of generators for the $\mathbb{Z}$-vector space of integers $(k_1, \ldots, k_r) \in \mathbb{Z}^r$ with $\lambda_1^{k_1} \cdots \lambda_r^{k_r} = 1$.

EH3. There exists an algorithm which takes $\lambda_1, \ldots, \lambda_r \in K$ on input and which computes a finite set of generators for the $\mathbb{Z}$-vector space of integers $(k_1, \ldots, k_r) \in \mathbb{Z}^r$ with $k_1 \lambda_1 + \cdots + k_r \lambda_r = 0$.

EH4. $K$ is closed under exponentiation and logarithm.

Indeed, we obtain EH2 and EH3 using the LLL-algorithm. Some of the results in this section go through when only a subset of the conditions are satisfied. In that case, we notice that EH2 $\Rightarrow$ EH1, EH3 $\Rightarrow$ EH1 and EH4 $\Rightarrow$ (EH2 $\Leftrightarrow$ EH3).
    
Given a finite set of matrices $\mathcal{M} \subseteq \operatorname{GL}_n(K)$, we give a numerical algorithm for the computation of the smallest closed algebraic subgroup $G$ of $\operatorname{GL}_n(K)$ which contains $\mathcal{M}$. We will represent $G$ by a finite set $S \subseteq \operatorname{GL}_n(K)$ and the finite basis $B$ of a Lie algebra $\mathfrak{g}$ over $K$, such that

$$G = \bigcup_{A \in S} A \exp (\mathfrak{g}),$$

and each $A \in S$ corresponds to a unique connected component $A \exp (\mathfrak{g})$ of $G$. We will also prove that there exists a precision $p_0$ such that the algorithm yields the theoretically correct result for all $p \geq p_0$.
    
Let $T \subseteq \operatorname{GL}_n(K)$ be the group of invertible diagonal matrices. Each matrix $M \in T$ has the form $M = \operatorname{Diag} (\lambda)$, where $\lambda$ is the vector in $(K^{\neq})^n$ of the elements on the diagonal of $M$. The coordinate ring of $T$ is the set $K[x_1, x_1^{-1}, \ldots, x_n, x_n^{-1}]$ of Laurent polynomials in $x_1, \ldots, x_n$.
    
Now consider the case when $\mathcal{M}$ consists of a single diagonal matrix $M = \operatorname{Diag} (\lambda)$. Let $I(G)$ be the ideal which defines $G$. Given a relation $\lambda^k = \lambda_1^{k_1} \cdots \lambda_n^{k_n} = 1$ between the $\lambda_i$, any power $M^j = \operatorname{Diag} (\lambda^j)$ satisfies the same relation $(\lambda^j)^k = 1$, whence $x^k - 1 \in I(G)$. Let $I$ be the ideal generated by all $x^k - 1$, such that $\lambda^k = 1$.
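Whether a candidate diagonal matrix satisfies the binomial equations $x^k = 1$ generating $I$ can be checked directly on its diagonal vector. A small numerical sketch with a tolerance (the function name and interface are ours):

```python
def satisfies_relations(mu, relations, tol=1e-9):
    """Check mu_1**k_1 * ... * mu_n**k_n == 1 for every integer
    vector k in relations (complex arithmetic, with a tolerance)."""
    for k in relations:
        prod = 1
        for m, e in zip(mu, k):
            prod *= m ** e
        if abs(prod - 1) > tol:
            return False
    return True
```

For $\lambda = (i, -1)$, the relation lattice is generated by $(2, 1)$ and $(0, 2)$; the diagonal vector of $M^2 = \operatorname{Diag}(-1, 1)$ satisfies both relations, while $(i, 1)$ does not.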
    
We have $I = I(G)$. Assuming for contradiction that $I \neq I(G)$, choose

$$P = c_1 x^{k_1} + \cdots + c_r x^{k_r} \in I(G) \setminus I$$

such that the number $r$ of terms of $P$ is minimal. If $i \neq j$, then $\lambda^{k_i} \neq \lambda^{k_j}$, since otherwise $x^{k_i} - x^{k_j} \in I$ and $P - c_i (x^{k_i} - x^{k_j})$ has less than $r$ terms. In particular, the vectors $(\lambda^{j k_1}, \ldots, \lambda^{j k_r})$ with $j \in \{0, \ldots, r - 1\}$ are linearly independent. But this contradicts the fact that $P(\lambda^j) = 0$ for all $j \in \mathbb{N}$.
        
By EH2, we may compute a minimal finite set $\{k_1, \ldots, k_s\}$ of generators for the $\mathbb{Z}$-vector space of $k \in \mathbb{Z}^n$ with $\lambda^k = 1$. We may also compute a basis $B$ for $\mathfrak{g} = \{\mu \in K^n : k_1 \cdot \mu = \cdots = k_s \cdot \mu = 0\}$, where $\cdot$ denotes the standard inner product on $K^n$. Then $\exp \operatorname{Diag} (\mathfrak{g})$ is the connected component of $1$ in $G$, since $(e^{\mu})^{k_i} = e^{k_i \cdot \mu} = 1$ for all $\mu \in \mathfrak{g}$ and all $i$, and $\mathfrak{g}$ cannot be further enlarged while conserving this property.
    
Let $\Lambda = \{k \in \mathbb{Z}^n : \lambda^k = 1\}$ and let $s$ be its rank. We construct a basis $b_1, \ldots, b_n$ of $\mathbb{Q}^n$, by taking $b_i$ to be shortest in $\Lambda$ (if $i \leq s$) or in $\mathbb{Z}^n$ (if $i > s$), such that $b_i \notin \mathbb{Q} b_1 + \cdots + \mathbb{Q} b_{i - 1}$. This basis determines a toric change of coordinates $\tilde{x}_i = x^{b_i}$ with inverse of the same type, such that $\Lambda = \mathbb{Z} b_1 \oplus \cdots \oplus \mathbb{Z} b_s$ with respect to the new coordinates. Similarly, we may construct a basis $c_1, \ldots, c_{n - s}$ of $\mathfrak{g}$, by taking each $c_i$ to be shortest in $\mathfrak{g} \cap \mathbb{Z}^n$ such that the dimension of $\mathbb{Q} c_1 + \cdots + \mathbb{Q} c_i$ is maximal. This basis determines a second toric change of coordinates with $\mathfrak{g} = K c_1 \oplus \cdots \oplus K c_{n - s}$ with respect to the new coordinates.
    
After the above changes of coordinates, the ideal $I$ is determined by the equations $x_1^{d_1} = \cdots = x_s^{d_s} = 1$, for suitable $d_1, \ldots, d_s \in \mathbb{N}$. Setting

$$G' = \{\operatorname{Diag} (\mu) \in T : \mu_1^{d_1} = \cdots = \mu_s^{d_s} = 1\},$$

it follows that $G = G'$. Rewriting $I$ with respect to the original coordinates now completes the computation of $I(G)$.
    
Let us now consider the case when $\mathcal{M}$ consists of a single arbitrary matrix $M$. Then we first compute the multiplicative Jordan decomposition of $M$. Modulo a change of basis of $V = K^n$, this means that

$$M = M_s M_u = M_u M_s,$$

where $M_s$ and $M_u$ are the semi-simple and unipotent parts of $M$:

$$M_s = \operatorname{Diag} (\lambda_1, \ldots, \lambda_n), \qquad M_u = \exp (N) = 1 + N + \frac{N^2}{2} + \cdots,$$

where $N$ is a nilpotent matrix which commutes with $M_s$. Recall that $f(N)$ is well-defined for power series $f$ and nilpotent matrices $N$; in this case, $\log M_u = N$ and $\exp \log M_u = M_u$.
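Since $N$ is nilpotent, the exponential and the logarithm reduce to finite sums that stop after at most $n$ terms. A brief numerical sketch (the helper names are ours):

```python
import numpy as np

def nilpotent_exp(N):
    """exp(N) for nilpotent N: the power series terminates."""
    n = N.shape[0]
    E, P, fact = np.eye(n), np.eye(n), 1
    for k in range(1, n):
        P = P @ N
        fact *= k
        E = E + P / fact
    return E

def unipotent_log(U):
    """log(U) for unipotent U, i.e. N = U - 1 nilpotent."""
    n = U.shape[0]
    N = U - np.eye(n)
    L, P = np.zeros((n, n)), np.eye(n)
    for k in range(1, n):
        P = P @ N
        L = L + ((-1) ** (k + 1) / k) * P   # N - N^2/2 + N^3/3 - ...
    return L
```

The two maps are mutually inverse on nilpotent/unipotent matrices, which is exactly what makes $\log M_u$ well-defined above.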
    
The result being clear if $M_u = 1$, assume $M_u \neq 1$ and let $G = \overline{\langle M_u \rangle}$ and $H = \{\exp (t \log M_u) : t \in K\}$. We clearly have $G \subseteq H$, since $H$ is a closed algebraic group which contains $M_u$. Moreover, the set $\{M_u^j : j \in \mathbb{N}\}$ is infinite, so $\dim G \geq 1$. Since $H$ is irreducible and $\dim H = 1$, we conclude that $G = H$.
    
Since $G = \overline{\langle M \rangle}$ is a commutative group, [Hum81, Theorem 15.5] implies that $G = G_s G_u$, where $G_s$ and $G_u$ are closed subgroups of $G$ consisting of semi-simple resp. unipotent matrices. Now $\overline{\langle M_s \rangle}$ and $\overline{\langle M_u \rangle}$ are closed subgroups of $G_s$ resp. $G_u$, so $\overline{\langle M_s \rangle} \cdot \overline{\langle M_u \rangle}$ is a closed subgroup of $G$. Since $M = M_s M_u \in \overline{\langle M_s \rangle} \cdot \overline{\langle M_u \rangle}$, it follows that $G = \overline{\langle M_s \rangle} \cdot \overline{\langle M_u \rangle}$.

In particular, if $G = \overline{\langle M \rangle}$, then $G = \overline{\langle M_s \rangle} \cdot \overline{\langle M_u \rangle}$.
    
In order to compute the closure of the product of a finite number of algebraic groups of the form $\overline{\langle M \rangle}$, an important subproblem is to test whether a given matrix $A$ belongs to $G = \overline{\langle M \rangle}$.
    
We first observe that $A \in G$ implies $A_s \in G_s$ and $A_u \in G_u$. After the computation of $G_s$ and $G_u$ with $G = G_s \cdot G_u$, it therefore suffices to check that $A_s \in G_s$ and $A_u \in G_u$. In fact, it suffices to check whether $A_s \in G_s$, since $\exp (t \log M_u)$ is the unique matrix in $G_u$ with a prescribed first non-zero off-diagonal entry, so that membership $A_u \in G_u$ is tested directly. Modulo a suitable base change, we have thus reduced the general problem to the case when $M$ is a diagonal matrix whose eigenvalues are all roots of unity.
    
Assume that $A$ and $M$ are such that $A \in G = \overline{\langle M \rangle}$. Since $A$ and $M$ commute, it follows that $A$ and $M$ can be diagonalized w.r.t. a common basis. The elements of this basis are elements of the different eigenspaces of $M$. In other words, if $M = \operatorname{Diag} (\lambda_1 \operatorname{Id}_{n_1}, \ldots, \lambda_r \operatorname{Id}_{n_r})$ with pairwise distinct $\lambda_i$, then $P^{-1} A P$ is diagonal for some block matrix $P = \operatorname{Diag} (P_1, \ldots, P_r)$ with $P_i \in \operatorname{GL}_{n_i}(K)$ for each $i$. It follows that $A = P \operatorname{Diag} (\mu) P^{-1}$ for certain $\mu \in (K^{\neq})^n$. Without loss of generality, we may therefore replace $G$ by the intersection of $G$ with the group $T$ of invertible diagonal matrices.
    
From now on, we assume that the above two reductions have been made. Let $A = \operatorname{Diag} (\mu)$ be a diagonal matrix in $T$. By lemma 27, we have $A \in G$ if and only if any $\mathbb{Z}$-linear relation $k_1 \alpha_1 + \cdots + k_n \alpha_n = 0$ between the diagonal entries of elements of $\mathfrak{g}$ induces a relation $\mu_1^{k_1} \cdots \mu_n^{k_n} = 1$, where $\mu_1, \ldots, \mu_n$ are the entries of $\mu$. Now consider a random matrix $N$ in $\mathfrak{g}$, i.e. a linear combination of the basis elements with small random integer coefficients. We compute its blockwise Jordan normal form $\tilde{N} = P^{-1} N P$, so that $\tilde{N}$ is triangular, and let $\alpha = (\alpha_1, \ldots, \alpha_n)$ be the restriction of $\tilde{N}$ to the diagonal. We have $\exp (t \tilde{N}) \in P^{-1} G P$ for all $t \in K$. Computing a basis for the $\mathbb{Z}$-linear relations of the form $k_1 \alpha_1 + \cdots + k_n \alpha_n = 0$ using EH3, the above criterion now enables us to check whether $A \in G$.
    
If the check whether $A \in G$ succeeds, then we are clearly done. Otherwise, since $N$ was chosen in a random way, the relation $k_1 \alpha_1 + \cdots + k_n \alpha_n = 0$ is very likely to be satisfied for all possible choices of $N$ (up to permutations of coordinates inside each block). Indeed, the $N$ for which this is not the case lie on a countable union $\Sigma$ of algebraic varieties of a lower dimension, so $\Sigma$ has measure $0$. Heuristically speaking, we may therefore conclude that $A \notin G$ if the check fails (at least temporarily, modulo some final checks when the overall computation of $G$ will be completed).
    
Theoretically speaking, we may perform the above computations with $N(t) = t_1 N_1 + \cdots + t_d N_d$ instead of $N$, where $N_1, \ldots, N_d$ is a basis of $\mathfrak{g}$ and the $t_i$ are formal parameters. We then check whether the relation $k_1 \alpha_1 + \cdots + k_n \alpha_n = 0$ is still satisfied for the analogue $\alpha (t)$ of $\alpha$. If so, then we are sure that $A \notin G$. Otherwise, we keep trying with other random elements of $\mathfrak{g}$.
    
It is likely that a more efficient theoretical algorithm can be designed for testing $\mathbb{Z}$-linear relations between the eigenvalues of elements in $\mathfrak{g}$. One of the referees suggested using similar methods as in [Mas88, Ber95, CS98]. However, we did not study this topic in more detail, since our final algorithm for the computation of Galois groups will be based on heuristics anyway. We also notice that a “really good” random number generator should actually never generate points which satisfy non-trivial algebraic relations.
    

A Lie algebra $\mathfrak{h} \subseteq \operatorname{Mat}_n(K)$ is said to be algebraic, if it is the Lie algebra of some algebraic group, i.e. if $\exp (\mathfrak{h})$ is an algebraic subset of $\operatorname{GL}_n(K)$. It is classical [Bor91, Corollary 7.7] that the smallest Lie algebra generated by a finite number of algebraic Lie algebras is again algebraic. The Lie algebras we will consider in our algorithms will always be assumed to be algebraic. Given a finite number $\mathfrak{h}_1, \ldots, \mathfrak{h}_r$ of algebraic Lie algebras and a basis $B$ for $\mathfrak{h}_1 + \cdots + \mathfrak{h}_r$, it is easy to enrich $B$ so that $\operatorname{Vect} (B)$ is a Lie algebra: as long as $[A, A'] \notin \operatorname{Vect} (B)$ for two elements $A, A' \in B$, we add $[A, A']$ to $B$. By what precedes, the computed Lie algebra $\operatorname{Vect} (B)$ is again algebraic.
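The enrichment loop can be sketched numerically: keep an orthogonalized spanning set and add any commutator that falls outside the current span. This floating-point illustration uses our own names (exact arithmetic would be needed for certified results):

```python
import numpy as np

def close_under_brackets(gens, tol=1e-9):
    """Grow a basis until [A, B] lies in its span for all pairs."""
    basis = []
    def add(M):
        v = M.ravel().astype(float)
        for B in basis:
            b = B.ravel()
            v = v - (b @ v) / (b @ b) * b   # orthogonalize against the span
        if np.linalg.norm(v) > tol:
            basis.append(v.reshape(M.shape))
            return True
        return False
    for G in gens:
        add(np.asarray(G, dtype=float))
    changed = True
    while changed:
        changed = False
        for A in list(basis):
            for B in list(basis):
                if add(A @ B - B @ A):      # commutator [A, B]
                    changed = True
    return basis
```

For the two standard nilpotent generators of $\mathfrak{sl}_2$, a single round of commutators adds the diagonal element and the basis stabilizes at dimension $3$.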
    
Putting together the ingredients from the previous sections, we now have the following algorithm for computing the smallest closed algebraic group $G$ which contains a given finite set of matrices $\mathcal{M}$.
    
Algorithm Closure($\mathcal{M}$)

  Input: a subset $\mathcal{M}$ of $\operatorname{GL}_n(K)$
  Output: a numeric approximation of $G = \overline{\langle \mathcal{M} \rangle}$

  Step 1. [Initialize algorithm]
    Compute $\overline{\langle M \rangle} = (S_M, \mathfrak{g}_M)$ for each $M \in \mathcal{M}$
    Let $\mathfrak{g}$ be the Lie algebra generated by the $\mathfrak{g}_M$ (notice that $\mathfrak{g}$ is algebraic)
    Let $S := \bigcup_{M \in \mathcal{M}} S_M$

  Step 2. [Closure]
    While there exists an $A \in S$ with $A \mathfrak{g} A^{-1} \not\subseteq \mathfrak{g}$ set $\mathfrak{g} := \operatorname{Lie} (\mathfrak{g} \cup A \mathfrak{g} A^{-1})$
    While there exists an $A \in S$ with $A^{-1} \notin S \exp (\mathfrak{g})$ set $S := S \cup \{A^{-1}\}$
    While there exists $A, B \in S$ with $A B \notin S \exp (\mathfrak{g})$ do
      Compute $\overline{\langle A B \rangle} = (S', \mathfrak{g}')$
      If $\mathfrak{g}' \not\subseteq \mathfrak{g}$ then set $\mathfrak{g} := \operatorname{Lie} (\mathfrak{g} \cup \mathfrak{g}')$, quit loop and repeat step 2
      Otherwise, set $S := S \cup \{A B\}$
    Return $(S, \mathfrak{g})$
The termination of this algorithm relies on a lemma, whose proof was kindly communicated to the author by J.-Y. Hée.
Let $H$ be a closed algebraic subgroup of $\operatorname{GL}_n(K)$ and let $A_1, \ldots, A_r$ be a finite number of matrices in the normalizer of $H$. Denote by $\Gamma$ the group generated by $H$ and $A_1, \ldots, A_r$. If all elements in $\Gamma / H$ have finite order, then $\Gamma / H$ is finite.
    
In the case when $H = \{1\}$, the result is classical [Dix71, Theorem 9.2]. In the general case, the normalizer $N$ of $H$ is a closed algebraic subgroup of $\operatorname{GL}_n(K)$ and $H$ is a normal subgroup of $N$. By [Bor91, Theorem 6.8 and Proposition 1.10], it follows that $N / H$ is an affine algebraic group which is isomorphic to a closed algebraic matrix group. This reduces the general case to the special case when $H = \{1\}$.
      
There exists a precision $p_0$ such that, for every $p$ with $p \geq p_0$, the pair $(S, \mathfrak{g})$ returned by Closure($\mathcal{M}$) coincides with the smallest closed algebraic subgroup $G$ of $\operatorname{GL}_n(K)$ which contains $\mathcal{M}$.
    
         increases throughout the execution of the algorithm, so
        it remains ultimately constant. At this point, the set
 increases throughout the execution of the algorithm, so
        it remains ultimately constant. At this point, the set  will keep growing and the lemma implies that
 will keep growing and the lemma implies that  ultimately stabilizes. When this happens,
 ultimately stabilizes. When this happens,  is closed under multiplication modulo
 is closed under multiplication modulo  ,
        as well as under multiplicative inverses, since each element in
,
        as well as under multiplicative inverses, since each element in  has finite order modulo
 has finite order modulo  . We
        conclude that
. We
        conclude that  is indeed the smallest closed
        algebraic subgroup of
 is indeed the smallest closed
        algebraic subgroup of  which contains
 which contains  , provided that the approximate zero-test always returns
        the right result.
, provided that the approximate zero-test always returns
        the right result.
      
          Let $p_0$ be a sufficient precision such that all
          zero-tests in this execution tree are still correct when we replace
          the infinite precision by a precision $p \geqslant p_0$. Then
          the trace of the execution at any finite precision $p \geqslant p_0$
          coincides with the trace of the execution at infinite precision.
          This completes the proof.
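For a finite group of matrices, the saturation step at the heart of Closure can be sketched in a few lines. This is a toy model under strong assumptions: the generated group is finite, and the approximate zero-test is replaced by a fixed tolerance. The function name `closure` and all parameters are ours, not the paper's; the real algorithm also maintains the connected component.

```python
import itertools
import numpy as np

def closure(gens, tol=1e-9, max_size=1000):
    """Saturate a finite set of invertible matrices under multiplication.

    For a *finite* matrix group this terminates with the full group.
    Approximate equality with tolerance `tol` plays the role of the
    approximate zero-test discussed in the text.
    """
    group = [np.array(g, dtype=float) for g in gens]
    changed = True
    while changed and len(group) < max_size:
        changed = False
        for a, b in itertools.product(list(group), repeat=2):
            c = a @ b
            if not any(np.allclose(c, g, atol=tol) for g in group):
                group.append(c)
                changed = True
    return group

# The cyclic group of order 4 generated by a quarter turn:
r = [[0.0, -1.0], [1.0, 0.0]]
G = closure([r])
assert len(G) == 4
```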
         
        
      
      Assume now that $\mathcal{S}$ is the set of generators for
      $G$ as computed in theorem 20. Assume
      that we have computed a reasonable candidate for $G$, expressed in the
      original basis corresponding to $L$: say a basis $\mathfrak{L}$ for the
      Lie algebra of its connected component, together with a finite set $F$
      of matrices. We still have to reconstruct $\mathfrak{L}$ and $F$
      with entries in $\mathcal{K}$ such that they still determine the same group $G$.
    
      In the case of $\mathfrak{L}$, by selecting a suitable basis of
      $\mathfrak{L}$, we may consider $\mathfrak{L}$ as a
      big $d \times n^2$ matrix whose first $d$
      columns are linearly independent (after a suitable permutation of
      coordinates). We compute the row-echelon form of
      this basis:
    
 
    
      The entries of the row-echelon form $E$ must be in $\mathcal{K}$:
      provided that $\mathfrak{L}$ is indeed generated by a basis of
      matrices with entries in $\mathcal{K}$, the row-echelon form
      of this second basis coincides with $E$. It
      therefore suffices to reconstruct the entries of $E$
      using the LLL-algorithm.
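The LLL-based reconstruction itself is not reproduced here; as a simplified stand-in, the following sketch recovers *rational* entries from floating-point approximations by continued fractions, the one-dimensional analogue of lattice reduction. Genuinely algebraic entries of higher degree require the LLL-algorithm [LLL82] proper; the function name and tolerances are ours.

```python
from fractions import Fraction

def reconstruct_rational(x, max_den=10**6, tol=1e-9):
    """Recover a small-denominator rational from a float approximation.

    Continued-fraction reconstruction (limit_denominator) is a special
    case of the lattice reduction used in the paper for algebraic entries.
    """
    r = Fraction(x).limit_denominator(max_den)
    if abs(float(r) - x) > tol:
        raise ValueError("no plausible rational of small height")
    return r

approx = 0.14285714285714285  # floating-point approximation of 1/7
assert reconstruct_rational(approx) == Fraction(1, 7)
```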
    
      In the case of a matrix $M \in F$, the component $M G^{\circ}$
      is an algebraic variety of dimension $d$
      over $\mathcal{K}$. Now choose $\tilde{M}$ close
      to $M$ in such a way that $d$
      independent coordinates of $\tilde{M}$ are all in $\mathcal{K}$.
      Then the other coordinates of $\tilde{M}$,
      considered as elements of $\mathbb{C}$, are easily found
      using Newton's method. Since $M G^{\circ}$ is an algebraic
      variety, these other coordinates are actually in $\mathcal{K}$,
      and we reconstruct them using the LLL-algorithm.
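The Newton step can be illustrated on the simplest possible variety, a circle: fix one "independent" coordinate to a nearby rational value and solve for the other. A minimal sketch (function names and numbers are ours):

```python
def newton(f, df, y0, steps=50, tol=1e-14):
    """Plain Newton iteration, as used to recover the remaining coordinates."""
    y = y0
    for _ in range(steps):
        dy = f(y) / df(y)
        y -= dy
        if abs(dy) < tol:
            break
    return y

# Point on the circle x^2 + y^2 = 1: fix the independent coordinate
# x = 3/5 (rational) and solve for the dependent coordinate y near 0.7.
x = 3.0 / 5.0
y = newton(lambda y: x * x + y * y - 1.0, lambda y: 2.0 * y, 0.7)
assert abs(y - 0.8) < 1e-12  # y = 4/5: again rational, as predicted
```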
    
      The algorithm Closure from the previous section is quite
      inefficient when the set $\mathcal{G}$ becomes large. It is
      therefore useful to seek a better computational representation of
      $\mathcal{G}$. For finite groups $\mathcal{G}$, one
      classical idea is to search for a sequence of subgroups
$$\mathcal{G} = \mathcal{G}_0 \supseteq \mathcal{G}_1 \supseteq \cdots \supseteq \mathcal{G}_k = \{1\} \qquad (9)$$
      such that the indices $[\mathcal{G}_{i-1} : \mathcal{G}_i]$ are small. Then we may
      represent elements in $\mathcal{G}$ by sequences $(g_1, \ldots, g_k)$
      with $g_i \in \mathcal{G}_{i-1} / \mathcal{G}_i$ for each $i$.
      This representation is particularly useful if $\mathcal{G}$ operates on a set
      $X$ and if there exist points $x_1, \ldots, x_k$ in $X$ such that
      $\mathcal{G}_i$ is the stabilizer of the set $\{x_1, \ldots, x_i\}$ for each
      $i$. Then the set $\mathcal{G}_{i-1} / \mathcal{G}_i$ corresponds to the
      orbit of $x_i$ while leaving $x_1, \ldots, x_{i-1}$ fixed [Sim70, Sim71].
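For permutation groups, the orbit of a point together with coset representatives (a transversal) is computed by a simple breadth-first search; this is the basic building block of the Schreier-Sims method [Sim70, Sim71]. A sketch for permutations given as tuples (function names are ours):

```python
def orbit_with_transversal(point, gens):
    """Compute the orbit of `point` under permutation generators, together
    with a transversal: for each orbit element, a group element mapping
    `point` to it."""
    identity = tuple(range(len(gens[0])))
    transversal = {point: identity}
    queue = [point]
    while queue:
        x = queue.pop()
        for g in gens:
            y = g[x]
            if y not in transversal:
                t = transversal[x]
                # compose: first reach x via t, then apply g
                transversal[y] = tuple(g[t[i]] for i in range(len(g)))
                queue.append(y)
    return transversal

# S_3 generated by a transposition and a 3-cycle, acting on {0,1,2}:
gens = [(1, 0, 2), (1, 2, 0)]
orbit = orbit_with_transversal(0, gens)
assert set(orbit) == {0, 1, 2}                   # orbit of 0 is everything
assert all(t[0] == x for x, t in orbit.items())  # transversal property
```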
    
      In the case of matrix groups, one often takes $x_i = e_i$
      for the canonical basis vectors $e_1, \ldots, e_n$ [MO95]. However, this approach
      only yields interesting results when there exist non-trivial invariant
      subspaces under the action of the group, which will usually not be the
      case for us (otherwise we may factor $L$ and
      consider smaller problems). A theoretical way out of this is to also
      consider the action of $\mathcal{G}$ on exterior powers
      $\Lambda^k \mathbb{C}^n$. However, this approach is very expensive from a
      computational point of view. In our more specific context of matrices
      with complex entries, we will therefore combine two other approaches:
      non-commutative lattice reduction and the operation of $\mathcal{G}$
      on itself via conjugations $N \mapsto M N M^{-1}$.
    
      The algebra $\mathbb{C}^{n \times n}$ admits a natural (multiplicative)
      norm, given by

      $$\|M\| = \max_{v \neq 0} \frac{\|M v\|}{\|v\|},$$

      where $\|v\|$ stands for the Euclidean norm on $\mathbb{C}^n$.
      If $\mathcal{G}$ is finite, this enables us to
      construct subgroups $\mathcal{G}_i$ as in (9) as follows.
      Assuming that $\mathcal{G}_k, \ldots, \mathcal{G}_{i+1}$ have been constructed, we consider
      a matrix $M \in \mathcal{G} \setminus \mathcal{G}_{i+1}$ for which $\|M - 1\|$ is
      minimal, and let $\mathcal{G}_i$ be the set generated by $M$ and $\mathcal{G}_{i+1}$ in $\mathcal{G}$.
      This construction allows us to rapidly identify a big commutative part
      of $\mathcal{G}$. More precisely, we have
    
        Let $A, B \in \mathbb{C}^{n \times n}$ be such that $\|A - 1\| \leqslant \varepsilon$ and
        $\|B - 1\| \leqslant \delta$. Then we have

        $$\|A B A^{-1} B^{-1} - 1\| \leqslant C \varepsilon \delta$$

        for a suitable universal constant $C$.

        Writing $A = 1 + \alpha$ and $B = 1 + \beta$, we expand $A B A^{-1} B^{-1}$
        in $\alpha$ and $\beta$. This yields a
        non-commutative power series in $\alpha$ and $\beta$
        whose terms in $\alpha$ alone and in $\beta$ alone vanish. It follows that
        every remaining term contains both $\alpha$ and $\beta$, whence the bound.
        
      The proposition implies that $A B A^{-1} B^{-1}$ is much closer to the identity
      than $A$ and $B$ whenever $\varepsilon$ and $\delta$ are small. Now take $A = M_i$ and $B = M_j$
      with $i < j$, where the $M_i$ are as
      above. Then it follows that $M_i$ and $M_j$
      commute whenever $\|M_i - 1\| \|M_j - 1\|$ is sufficiently small. What is more, proposition 34 shows that taking commutators is a convenient way to
      construct matrices close to the identity from a set of non-commutative
      generators.
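The quantitative content of the proposition is easy to check numerically: for $A$ and $B$ at distance $\varepsilon$ from the identity, the commutator $A B A^{-1} B^{-1}$ lies at distance $O(\varepsilon^2)$. A sketch using the operator norm (largest singular value); the factor 10 below is a safety margin of ours, not the proposition's constant:

```python
import numpy as np

def comm_defect(A, B):
    """Distance of the commutator A B A^{-1} B^{-1} from the identity,
    measured in the operator norm induced by the Euclidean norm."""
    n = A.shape[0]
    C = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
    return np.linalg.norm(C - np.eye(n), 2)

rng = np.random.default_rng(0)
n = 3
P, Q = rng.standard_normal((n, n)), rng.standard_normal((n, n))
for eps in (1e-2, 1e-3):
    A = np.eye(n) + eps * P          # ||A - 1|| = O(eps)
    B = np.eye(n) + eps * Q          # ||B - 1|| = O(eps)
    # The defect is O(eps^2), much smaller than eps itself:
    bound = 10 * eps**2 * np.linalg.norm(P, 2) * np.linalg.norm(Q, 2)
    assert comm_defect(A, B) < bound
```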
    
      However, from the effective point of view, we will not describe the
      exact computation of minimal representatives $M$ in cosets of algebraic groups in detail. We will rather simulate such a
      computation in a way which is sufficient for our purpose. If $M$
      is such that $\|M - 1\|$ is small, then we
      will also try to use the fact that the centralizer of
      $M$ is often a big subgroup of $\mathcal{G}$,
      so the orbit of $M$ under conjugation is small.
    
      Let $H$ be a closed algebraic subgroup of $\mathrm{GL}_n(\mathbb{C})$
      with associated Lie algebra $\mathfrak{h}$. In
      this section, we will show how to compute efficiently with elements of
      the finite group $\mathcal{G}$ modulo $H$. Until the very end of this
      section, we assume that $\mathcal{G}$ is included in the
      connected component of the normalizer of $H$ in $\mathrm{GL}_n(\mathbb{C})$.
      We denote by $\mathfrak{n}$ the Lie algebra
      of this connected component. By [Hum81, Theorem 13.3], we
      have $\mathfrak{n} = \{ X \in \mathbb{C}^{n \times n} : [X, \mathfrak{h}] \subseteq \mathfrak{h} \}$.
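The Lie algebra of the normalizer consists of the matrices $X$ with $[X, \mathfrak{h}] \subseteq \mathfrak{h}$, so a basis for it can be computed by plain linear algebra, as the kernel of a linear map. A sketch with numpy (row-major vectorization; function names are ours):

```python
import numpy as np

def lie_normalizer(h_basis, n, tol=1e-9):
    """Basis of {X in gl_n : [X, H_j] in span(h_basis) for all j}, computed
    by requiring the component of [X, H_j] orthogonal to span(h_basis)
    to vanish."""
    H = np.array([h.flatten() for h in h_basis])          # span as rows
    # Orthogonal projector onto the complement of span(h_basis):
    Pc = np.eye(n * n) - H.T @ np.linalg.pinv(H.T)
    rows = []
    for Hj in h_basis:
        # [X, Hj] is linear in X; with row-major vec:
        # vec(X Hj - Hj X) = (I (x) Hj^T - Hj (x) I) vec(X)
        ad = np.kron(np.eye(n), Hj.T) - np.kron(Hj, np.eye(n))
        rows.append(Pc @ ad)
    M = np.vstack(rows)
    # The kernel of M is the normalizer Lie algebra, vectorized:
    _, s, Vh = np.linalg.svd(M)
    null = Vh[int(np.sum(s > tol)):]
    return [v.reshape(n, n) for v in null]

# Example: h = span(E12) in gl_2; its normalizer Lie algebra is the
# 3-dimensional algebra of upper-triangular matrices.
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
nrm = lie_normalizer([E12], 2)
assert len(nrm) == 3
```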
    
      Let $M \in \mathcal{G}$ and
      recall that $M$ belongs to the normalizer of
      $H$. If $M' \in M H$, then $M'$
      also belongs to the normalizer of $H$. Since $M$
      lies in the connected component of this normalizer,
      we have $\log M \in \mathfrak{n}$. Now consider the orthogonal supplement
      $\mathfrak{h}^{\perp}$ of $\mathfrak{h}$ for the Hermitian
      product on $\mathbb{C}^{n \times n}$. We define $\check{M} = \exp (\pi (\log M))$,
      where $\pi (\log M)$ is the orthogonal projection of $\log M$
      on $\mathfrak{h}^{\perp}$. From $\log M \in \mathfrak{n}$,
      it follows that $\check{M} \in M H$, and we denote $\check{M} = M \operatorname{mod} H$.
      Since $H$ is connected, the function $M \mapsto \check{M}$
      may actually be analytically continued to a multivalued
      function. After choosing branch cuts (the way
      this is done is not crucial for what follows, provided that we make the
      standard choice for $\log x$ with $x \in \mathbb{R}^{>}$),
      this allows us to extend the definitions of $\log M$ and
      $\check{M}$ to the case when $M$ is not close to $1$.
    
    
      Let $M \in \mathcal{G}$ and $k \in \mathbb{N}$ be such that

      $M^k \in H$;
      $M$ generates $\mathcal{G}$ modulo $H$;
      $\|M - 1\| \leqslant \|N - 1\|$ whenever $N$ is another
        such generator.
      
      Let $k = p_1^{a_1} \cdots p_r^{a_r}$ be the prime decomposition of the order of
      $M$ modulo $H$, with $p_1 < \cdots < p_r$. Let $M_0 = M$ and $M_i = M_{i-1}^{p_i^{a_i}}$
      for all $i$. Let $\mathcal{C}$ be the
      subgroup of $\mathcal{G}$ of elements which commute with $M$
      modulo $H$, so that $M \in \mathcal{C}$. For $N \in \mathcal{G}$, we represent elements in the
      quotient $\mathcal{G} / \mathcal{C}$ by elements in the orbit of the action
      $N \mapsto N M N^{-1}$ modulo $H$. Since $\mathcal{C}$ normalizes $H$, the set $\mathfrak{h}$ is a Lie algebra whose
      normalizer contains $\mathcal{C}$. Consequently, $\mathcal{C} H$ is a group, and we represent elements in $\mathcal{C} H$ by
      products $N C$, with $N \in \mathcal{C}$ and $C \in H$. The elements in $\mathcal{C}$ are
      represented in a similar way as the elements in $\mathcal{G}$,
      using recursion. The successive matrices $M$ for
      $\mathcal{G}$, $\mathcal{C}$, etc. will
      be called a basis for $\mathcal{G}$. A basis $M^{(1)}, \ldots, M^{(l)}$ is said to be sorted if
      $\|M^{(1)} - 1\| \geqslant \cdots \geqslant \|M^{(l)} - 1\|$.
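The role of the prime decomposition of the order can be illustrated with a rotation matrix: an element of order $12 = 2^2 \cdot 3$ splits into commuting factors of orders 4 and 3, recombined via the Chinese remainder theorem. A sketch (function names are ours; the paper works modulo $H$, which is trivial here):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def matrix_order(M, limit=100, tol=1e-9):
    """Smallest k >= 1 with M^k = 1 (assumes M has finite order)."""
    P = M.copy()
    for k in range(1, limit + 1):
        if np.allclose(P, np.eye(M.shape[0]), atol=tol):
            return k
        P = P @ M
    raise ValueError("no finite order found")

M = rot(2 * np.pi / 12)            # order 12 = 2^2 * 3
assert matrix_order(M) == 12
# Split <M> into commuting factors of prime power order:
M1 = np.linalg.matrix_power(M, 3)  # order 4
M2 = np.linalg.matrix_power(M, 4)  # order 3
assert matrix_order(M1) == 4 and matrix_order(M2) == 3
# CRT: 9 + 4 = 13 = 1 (mod 12), so M = M1^3 @ M2
assert np.allclose(np.linalg.matrix_power(M1, 3) @ M2, M)
```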
    
      Let $M^{(1)}, \ldots, M^{(l)}$ be a sorted basis for $\mathcal{G}$ and assume that we want to
      compute the extension $\mathcal{G}'$ of $\mathcal{G}$
      by a new matrix $M$. Whenever we hit an element $N$
      with $\|N - 1\| < \|M^{(l)} - 1\|$ during our computations,
      then we start the process of basis reduction, which is described below.
      Whenever we find an element in $H \setminus \{1\}$, then we abort
      all computations and return this element (indeed, in that case, we may
      continue with the closure of the connected component in Closure).
    
    
      Let $M_0$, $M_1$, etc.
      be as above. We start by computing the orbit of $M_i$ under conjugation
      modulo $H$ for $i = 0, 1, \ldots$. Whenever we
      hit an element $N$ (modulo $H$)
      with $\|N - 1\| < \|M^{(l)} - 1\|$ or $N \in H \setminus \{1\}$, then we start
      the process of basis reduction. Otherwise, we obtain a finite orbit,
      together with a finite number of matrices by which we have to extend
      $\mathcal{C}$. We keep doing this using the same method for
      $\mathcal{C}$ until the orbit stabilizes.
    
      At the end, we still have to show how to extend the cyclic group $\langle M \rangle$
      with a new matrix $N$. Now recursive application of
      the algorithm to $\langle M \rangle$ and $N$
      yields a sorted basis. When keeping track of the
      corresponding powers of $M$ during the computations,
      we also obtain a finite system of generators, given as powers of $M$.
      Using g.c.d. computations on the exponents we either obtain a minimal
      generator $M^e$ or a new element in the connected
      component. In the first case, we return $M^e$ if $\|M^e - 1\| \geqslant \|M^{(l)} - 1\|$ and apply basis reduction otherwise.
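The g.c.d. step can be illustrated as follows: the exponents $e$ with $M^e$ in a given subgroup form a subgroup of $\mathbb{Z}$, generated by the g.c.d. of any finite system of such exponents together with the order of $M$. A minimal sketch (function name and numbers are ours):

```python
from functools import reduce
from math import gcd

def minimal_generator(exponents, order):
    """The exponents e with M^e in a given subgroup form a subgroup of
    Z/order; its minimal positive generator is obtained by g.c.d.s."""
    return reduce(gcd, exponents, order)

# Powers of M known to land in the subgroup: M^8 and M^12, with M of
# order 24.  The gcd with 24 shows that M^4 is already in the subgroup
# and generates all such powers:
assert minimal_generator([8, 12], 24) == 4
```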
    
      Let $M^{(1)}, \ldots, M^{(l)}$ be in
      $\mathcal{G}$ with $\|M^{(1)} - 1\| \geqslant \cdots \geqslant \|M^{(l)} - 1\|$. We call
      $M^{(1)}, \ldots, M^{(l)}$ a raw basis. In the above algorithms, raw bases
      occur when we are given an ordered basis $M^{(1)}, \ldots, M^{(l)}$ for
      $\mathcal{G}$, and we find a new element $N$
      with $\|N - 1\| < \|M^{(l)} - 1\|$.
    
    
      Using the above base extension procedure, we may transform a raw basis
      $M^{(1)}, \ldots, M^{(l)}$ into a basis for $\mathcal{G}$:
      starting with $M^{(1)}$, we successively add $M^{(2)}, \ldots, M^{(l)}$. However, it is more efficient to reduce the raw basis
      first. More precisely, let us now describe a procedure which tries to
      replace $M^{(1)}, \ldots, M^{(l)}$ by a better raw basis $N^{(1)}, \ldots, N^{(m)}$,
      which generates the same group, and whose elements are closer to the identity.
      Of course, we may always return the original basis if a better one could
      not be found.
    
      We first test whether all basis elements are roots of unity modulo $H$. If not, then we have found a new element in the connected
      component. We next test whether there exist $i \neq j$ with
      $\| M^{(i)} M^{(j)} (M^{(i)})^{-1} (M^{(j)})^{-1} - 1 \| < \| M^{(j)} - 1 \|$, in which case we keep adding the smallest such
      commutator to the basis. Whenever this stops, we write $M^{(i)} = \exp A^{(i)}$
      with $A^{(i)} = \log M^{(i)}$ and consider all lattice reductions
      $A^{(i)} \mapsto A^{(i)} + k A^{(j)}$ proposed by the LLL-algorithm in the commutative
      vector space $\mathbb{C}^{n \times n}$. Whenever $\| \exp (A^{(i)} + k A^{(j)}) - 1 \| < \| M^{(i)} - 1 \|$,
      for one such reduction, then we perform the corresponding reduction
      $M^{(i)} \mapsto M^{(i)} (M^{(j)})^k$ on our basis and keep repeating the basis reduction
      process.
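For a commuting pair, replacing a basis element by its product with a power of another so as to come closer to the identity is the analogue of a size-reduction step in lattice reduction; a brute-force search over small powers already shows the effect. A sketch (function names are ours; the paper obtains the exponent via the LLL-algorithm instead of brute force):

```python
import numpy as np

def reduce_against(M, N, max_k=10):
    """Try to replace M by M @ N^k (|k| <= max_k) so as to minimize the
    distance to the identity in the operator norm."""
    n = M.shape[0]
    best, best_d = M, np.linalg.norm(M - np.eye(n), 2)
    for k in range(-max_k, max_k + 1):
        C = M @ np.linalg.matrix_power(N, k)
        d = np.linalg.norm(C - np.eye(n), 2)
        if d < best_d:
            best, best_d = C, d
    return best

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

# A raw basis of two commuting rotations, by 0.7 rad and 0.5 rad.
# Reducing the larger against the smaller leaves a rotation by 0.2 rad,
# closer to the identity:
R = reduce_against(rot(0.7), rot(0.5))
assert np.allclose(R, rot(0.2), atol=1e-12)
```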
    
      Let us finally consider the case when $\mathcal{G}$ is not included in the
      connected component $N^{\circ}$ of the normalizer of $H$
      in $\mathrm{GL}_n(\mathbb{C})$. In that case, we start
      with the computation of a basis for the Lie algebra of $N^{\circ}$, using
      linear algebra. Since $N^{\circ}$ is a normal subgroup of
      the normalizer, we have $\mathcal{G} \cap N^{\circ} \trianglelefteq \mathcal{G}$. Now we have
      explained above how to compute with elements in $\mathcal{G} \cap N^{\circ}$.
      If the quotient $\mathcal{G} / (\mathcal{G} \cap N^{\circ})$ is large, then we may use recursion for computations in
      this finite group. If it is small,
      then its elements necessarily have small order,
      so we simply list the elements of $\mathcal{G} / (\mathcal{G} \cap N^{\circ})$.
    
    We hope that we have provided convincing evidence that analytic methods may be used for the efficient computation of differential Galois groups and related problems like the factorization of linear differential operators.
The two main remaining challenges are the concrete implementation of the algorithms presented here (as part of a more general library for computations with analytic functions, such as [vdHea05]) and the development of a priori or a posteriori methods for ensuring the correctness of the computed result. Some ideas in this direction are as follows:
        In particular, it is useful to compute the radical (or unipotent
        radical) of $G$, thereby reducing the study of
        $G$ to the study of a finite group, a semisimple
        (or reductive) group and a solvable (or unipotent) group [Hum81,
        page 125]. We refer to [dG00] for computational aspects
        of the corresponding Lie algebras.
        A priori knowledge about the group helps to further restrict the number of
        monomials (i.e. “generalized exponents”) to
        be considered. Indeed, if $H$ is an arbitrary
        algebraic subgroup of $\mathrm{GL}_n(\mathbb{C})$ containing $G$, for which the ring of
        invariants is easy to compute, then the invariants for $G$ must be searched in this ring. Also, there are known
        algorithms for computing the invariants for certain types of algebraic
        groups, like linearly reductive groups [Der99].
        The representation of $G$ we
        used in section 4 is efficient for computations (we
        merely do linear algebra in dimension $n^2$, lattice
        reduction and computations with small finite groups). Nevertheless, it
        may be interesting to reconstruct the algebraic equations for $G$ and search for equations which are particularly
        sparse with respect to suitably chosen coordinates. For instance, a
        big cyclic group admits a particularly nice (resp. large)
        Gröbner basis w.r.t. well chosen (resp.
        badly chosen) coordinates. Conversely, it may be interesting to switch
        back from a Gröbner basis representation to our representation.
        Once the connected component has been certified, we may then reliably
        conclude that a Stokes matrix of the form $\left( \begin{smallmatrix} 1 & s \\ 0 & 1 \end{smallmatrix} \right)$ with $s \neq 0$
        generates the group $\left\{ \left( \begin{smallmatrix} 1 & t \\ 0 & 1 \end{smallmatrix} \right) : t \in \mathbb{C} \right\}$.
        Another possibility is to study the orbits of vectors of solutions by the action of the Galois
        group. For instance, if $z_0$ and $z_1$
        are close regular points in $\mathcal{K}$, is it true that
        the orbit of a vector of solutions at $z_1$ under the action of the Galois
        group necessarily contains a point in $\mathcal{K}^n$? This is
        clearly the case for finite Galois groups and the full Galois group,
        as well as for certain simple explicit equations. More generally, as soon as the vector of solutions
        becomes more transcendental, its orbit under the action of the Galois
        group becomes larger, so the likelihood of finding a point in the
        intersection with $\mathcal{K}^n$ increases.
      Besides the above ideas for improving the algorithms, this paper also raises a few other interesting questions:
        What is the complexity of the algorithms in section 3.4, both in the
        case of algebraic coefficients and in more general cases?
        Also, as pointed out above, we may want to reconstruct
        equations for $G$ from the variety.
      
      Of course, a better mastering of the algorithms in this paper may also
      lead to more efficient algorithms for other computations which rely on
      differential Galois theory, like the computation of Liouvillian or other
      forms of solutions. More generally, our algorithms may be used for other
      computations with algebraic matrix groups over $\mathbb{C}$
      and other fields of characteristic $0$. We also
      expect all results to generalize to holonomic systems of linear partial
      differential equations.
    
[BB85] D. Bertrand and F. Beukers. Équations différentielles linéaires et majorations de multiplicités. Ann. Sci. de l'École Norm. Sup., 28(4-5):473–494, 1985.
[Bek94] E. Beke. Die Irreduzibilität der homogenen linearen Differentialgleichungen. Math. Ann., 45, 1894.
[Ber95] D. Bertrand. Minimal heights and polarizations on group varieties. Duke Math. J., 80(1):223–250, 1995.
[Bor91] A. Borel. Linear algebraic groups. Springer-Verlag, 2nd edition, 1991.
[Bra91] B.L.J. Braaksma. Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Diff. Eq, 92:45–75, 1991.
[Bra92] B.L.J. Braaksma. Multisummability of formal power series solutions to nonlinear meromorphic differential equations. Ann. Inst. Fourier de Grenoble, 42:517–540, 1992.
[CC90] D.V. Chudnovsky and G.V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory (computers in mathematics, Stanford, CA, 1986). In Lect. Notes in Pure and Applied Math., volume 125, pages 109–232, New York, 1990. Dekker.
[Clu04] T. Cluzeau. Algorithmique modulaire des équations différentielles linéaires. PhD thesis, Université de Limoges, 2004.
[CNP93] B. Candelberger, J.C. Nosmas, and F. Pham. Approche de la résurgence. Actualités mathématiques. Hermann, 1993.
[CS98] É. Compoint and M. Singer. Computing Galois groups of completely reducible differential equations. JSC, 11:1–22, 1998.
[Der99] H. Derksen. Computation of reductive group invariants. Adv. in Math., 141:366–384, 1999.
[dG00] W. de Graaf. Lie Algebras: Theory and Algorithms, volume 56 of North Holland Mathematical Library. Elsevier science, 2000.
[Dix71] J.D. Dixon. The structure of linear groups. Van Nostrand Reinhold Company, 1971.
[DJK03] H. Derksen, E. Jaendel, and P. Koiran. Quantum automata and algebraic groups. Technical Report 2003-39, LIP, École Norm. Sup. de Lyon, 2003. Presented at MEGA 2003; to appear in JSC.
[É85] J. Écalle. Les fonctions résurgentes I–III. Publ. Math. d'Orsay 1981 and 1985, 1985.
[É87] J. Écalle. L'accélération des fonctions résurgentes (survol). Unpublished manuscript, 1987.
[É92] J. Écalle. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques, 1992.
[É93] J. Écalle. Six lectures on transseries, analysable functions and the constructive proof of Dulac's conjecture. In D. Schlomiuk, editor, Bifurcations and periodic orbits of vector fields, pages 75–184. Kluwer, 1993.
[Fab85] E. Fabry. Sur les intégrales des équations différentielles linéaires à coefficients rationnels. PhD thesis, Paris, 1885.
[Hum81] J.E. Humphreys. Linear algebraic groups. Graduate Texts in Math. Springer, 1981.
[Kap57] I. Kaplansky. An introduction to differential algebra. Hermann, 1957.
[Kol73] E.R. Kolchin. Differential algebra and algebraic groups. Academic Press, New York, 1973.
[LLL82] A.K. Lenstra, H.W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261:515–534, 1982.
[Mas88] D. Masser. Linear relations on algebraic groups. In A. Baker, editor, New advances in transendence theory, pages 248–262. Cambridge Univ. Press, 1988.
[Men96] F. Menous. Les bonnes moyennes uniformisantes et leur application à la resommation réelle. PhD thesis, Université Paris-Sud, France, 1996.
[Mit96] C. Mitschi. Differential Galois groups of confluent generalized hypergeometric equations: an approach using stokes multipliers. Pac. J. Math., 176(2):365–405, 1996.
[MO95] Scott H. Murray and E. A. O'Brien. Selecting base points for the Schreier-Sims algorithm for matrix groups. JSC, 18:577–584, 1995.
[Moe73] R. Moenck. Fast computation of gcds. In Proc. of the 5th ACM Annual Symposium on Theory of Computing, pages 142–171, New York, 1973. ACM Press.
[MR91] J. Martinet and J.-P. Ramis. Elementary acceleration and multisummability. Ann. Inst. Henri Poincaré, 54(4):331–401, 1991.
[PW02] Victor Y. Pan and Xinmao Wang. Acceleration of euclidean algorithm and extensions. In Teo Mora, editor, Proc. ISSAC '02, pages 207–213, Lille, France, July 2002.
[Ram85] J.-P. Ramis. Phénomène de Stokes et filtration Gevrey sur le groupe de Picard-Vessiot. Notes aux CRAS, 301(I/5):165–167, 1985.
[Rit50] J.F. Ritt. Differential algebra. Amer. Math. Soc., New York, 1950.
[Sch95] L. Schlesinger. Handbuch der Theorie der linearen Differentialgleichungen, volume I. Teubner, Leipzig, 1895.
[Sch97] L. Schlesinger. Handbuch der Theorie der linearen Differentialgleichungen, volume II. Teubner, Leipzig, 1897.
[Sim70] C.C. Sims. Computational problems in abstract algebra, chapter Computational methods in the study of permutation groups, pages 169–183. Pergamon press, Oxford, 1970.
[Sim71] C.C. Sims. Determining the conjugacy classes of permutation groups. In G. Birkhoff and M. Hall Jr., editors, Computers in algebra and number theory, volume 4 of Proc. A.M.S., pages 191–195, New York, 1971. A.M.S.
[SU93] M. Singer and F. Ulmer. Galois groups of second and third order linear differential equations. JSC, 16:9–36, 1993.
[vdH99] J. van der Hoeven. Fast evaluation of holonomic functions. TCS, 210:199–215, 1999.
[vdH01a] J. van der Hoeven. Complex transseries solutions to algebraic differential equations. Technical Report 2001-34, Univ. d'Orsay, 2001.
[vdH01b] J. van der Hoeven. Fast evaluation of holonomic functions near and in singularities. JSC, 31:717–743, 2001.
[vdH02] J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
[vdH03] J. van der Hoeven. Majorants for formal power series. Technical Report 2003-15, Université Paris-Sud, Orsay, France, 2003.
[vdH04] J. van der Hoeven. Computations with effective real numbers. Technical Report 2004-02, Université Paris-Sud, Orsay, France, 2004. To appear in TCS.
[vdH05a] J. van der Hoeven. Effective complex analysis. JSC, 39:433–449, 2005.
[vdH05b] J. van der Hoeven. Efficient accelero-summation of holonomic functions. Technical Report 2005-54, Université Paris-Sud, Orsay, France, 2005. Submitted to JSC.
[vdHea05] J. van der Hoeven et al. Mmxlib: the standard library for Mathemagix, 2002–2005. http://www.mathemagix.org/mmxweb/web/welcome-mml.en.html.
[vdP95] M. van der Put. Differential equations modulo $p$. Compositio Mathematica, 97:227–251, 1995.
[vdPS03] M. van der Put and M. Singer. Galois Theory of Linear Differential Equations, volume 328 of Grundlehren der mathematischen Wissenschaften. Springer, 2003.
[vH96] M. van Hoeij. Factorization of linear differential operators. PhD thesis, Univ. of Nijmegen, The Netherlands, 1996.
[vH97] M. van Hoeij. Factorization of differential operators with rational functions coefficients. JSC, 24:537–561, 1997.
[vHW97] M. van Hoeij and J.A. Weil. An algorithm for computing invariants of differential Galois groups. J. Pure Appl. Algebra, 117–118:353–379, 1997.