> <\body> <\version-both> <\hide-preamble> >> \; <|version-both> <\hide-preamble> >> \; > <\doc-data||>|> \; |<\author-affiliation> Laboratoire d'informatique, UMR 7161 CNRS Campus de l'École polytechnique 1, rue Honoré d'Estienne d'Orves Bâtiment Alan Turing, CS35003 91120 Palaiseau |>>| \; >> The class of reduction-based algorithms was introduced recently as a new approach towards creative telescoping. Starting with Hermite reduction of rational functions, various reductions have been introduced for increasingly large classes of holonomic functions. In this paper we show how to construct reductions for general holonomic functions, in the purely differential setting. |<\abstract-acm> I.1.2 Algebraic algorithms ||> Let > be an effective field of characteristic zero and \\> a non-zero polynomial. Consider the system of differential equations <\eqnarray*> *y>||>>>> where \r>> is an r> matrix with entries in > and is a column vector of unknown functions. Notice that any system of differential equations =B*y> with \> can be rewritten in this form by taking > to be a multiple of all denominators. Let be a formal solution to() and consider the |]>>-module > of linear combinations *y=\*y+\+\*y> where \\|]>r>> is a row vector. Then > has the natural structure of a D-module for the derivation > defined by *y|)>=+\*\*A|)>*y>. A >-linear mapping |]>:\\\> is said to be a reduction if -f\Im \> for all \>. Such a reduction is said to be confined if its image is a finite dimensional subspace of > and normal if |]>=0> for all \>. In this paper, we will show how to construct confined reductions. Such reductions are interesting for their application to creative telescoping, as we briefly recall in section. The first reduction of this kind is Hermite reduction, in which case . The existence of normal confined reductions has been shown in increasingly general cases and most notably for Fuchsian differential equations. We refer to for more details and the application to creative telescoping.
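As an illustration of the classical starting point, here is a minimal Python sketch of one Hermite reduction step for rational functions (the case mentioned above); the polynomial helpers and all names are ours, and we assume a squarefree denominator D and an integrand A/D^m with m at least 2:

```python
from fractions import Fraction as F

# Dense polynomials over Q, coefficient lists ordered by increasing degree.
def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def padd(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else F(0)) +
                 (q[i] if i < len(q) else F(0)) for i in range(n)])

def pscale(p, c):
    return trim([c * a for a in p])

def pmul(p, q):
    if not p or not q:
        return []
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def pdivmod(p, q):
    p = list(p)
    quo = [F(0)] * max(len(p) - len(q) + 1, 0)
    while len(p) >= len(q):
        c = p[-1] / q[-1]
        d = len(p) - len(q)
        quo[d] += c
        for i, b in enumerate(q):
            p[d + i] -= c * b
        trim(p)
    return trim(quo), p

def pdiff(p):
    return trim([F(i) * p[i] for i in range(1, len(p))])

def ext_gcd(p, q):
    # Extended Euclidean algorithm: returns (g, s, t) with s*p + t*q = g.
    r0, r1, s0, s1, t0, t1 = list(p), list(q), [F(1)], [], [], [F(1)]
    while r1:
        quo, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        s0, s1 = s1, padd(s0, pscale(pmul(quo, s1), F(-1)))
        t0, t1 = t1, padd(t0, pscale(pmul(quo, t1), F(-1)))
    return r0, s0, t0

def hermite_step(A, D, m):
    """One Hermite step: writes A/D^m = g' + R/D^(m-1) with g = G/D^(m-1),
    assuming m >= 2 and D squarefree, so that gcd(D, D') is a constant.
    Returns (G, R)."""
    g, s, t = ext_gcd(D, pdiff(D))
    c = g[0]                                      # gcd is a nonzero constant
    _, C = pdivmod(pscale(pmul(t, A), 1 / c), D)  # C = (t*A/c) mod D
    B, rem = pdivmod(padd(A, pscale(pmul(C, pdiff(D)), F(-1))), D)
    R = padd(B, pscale(pdiff(C), F(1, m - 1)))
    return pscale(C, F(-1, m - 1)), R
```

For instance, reducing 1/(x^2+1)^2 yields G = x/2 and R = 1/2, recovering the classical identity 1/(x^2+1)^2 = (x/(2(x^2+1)))' + (1/2)/(x^2+1).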
Our construction of confined reductions proceeds in two stages. In section, we first focus on the >-submodule >> of > of linear combinations *y> with \\r>>. We will construct a >-linear mapping |\|\>:\>\\>> such that |f|\>-f\Im \> and |f|\>> is bounded from above for all \>>. Here we understand that *y|)>\deg \\max,\,deg \|)>> for all \\r>>. The construction uses a variant of Gaussian elimination that will be described in section. The head reduction may also be regarded as a way to reduce the valuation of in >, at the point at infinity. In section we turn to tail reductions, with the aim of reducing the valuation of at all other points in > and its algebraic closure |^>>. This is essentially similar to head reduction after a change of variables, while allowing ourselves to work in algebraic extensions of >. In the last section, we show how to glue the head reduction and the tail reductions at each of the roots of > together into a global confined reduction on >. Using straightforward linear algebra and suitable valuation bounds, one can further turn this reduction into a normal one, as will be shown in detail in section. The valuation bounds that are required in section are proved in section. In this section we also prove degree and valuation bounds for so-called head and tail choppers. The existence of head and tail choppers whose size is polynomial in the size of the original equation makes it possible to derive polynomial bounds for the complexity of creative telescoping: this follows from polynomial bounds for the dimension of |]>> and for the reduction of elements of >. We intend to work out the details in a forthcoming paper. Let > be an effective subfield of > and let =\/\ x> and =\/\ u> denote the partial derivations with respect to and . Consider a system of differential equations <\equation> *\ y>||>|*\ y>||>>>> where \\> is non-zero and \r>> are such that <\equation*> \*B|)>+\*A*B=\*A|)>+\*B*A. Setting =\>, the first part of() then becomes of the form().
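As a toy instance of such a system (our own example, not taken from the text), the function y = (x-u)^{-1} satisfies, with Delta = x-u, A = -1 and B = 1,

```latex
% y = (x-u)^{-1},  \Delta = x-u,  A = -1,  B = 1:
\Delta\,\partial_x y \;=\; (x-u)\cdot\bigl(-(x-u)^{-2}\bigr) \;=\; -y,
\qquad
\Delta\,\partial_u y \;=\; (x-u)\cdot (x-u)^{-2} \;=\; y .
```

Both equations can be verified by differentiating y directly.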
Notice that any bivariate holonomic function is an entry of a solution to a system of the form(). Let be a complex analytic solution of the above system of equations and let > be the |]>>-module generated by the entries of . Notice that > is stable under both > and >. For any *y\\> with \\|]>r>> and any non-singular contour > in > between two points ,\\\>, we may consider the integral <\eqnarray*> >||>f*\ x,>>>> which defines a function in the single variable . It is natural to ask under which conditions is a holonomic function and how to compute a differential operator \|]>> with . The idea of creative telescoping is to compute a differential operator \|]>> and a function =\ \> with \\> such that <\eqnarray*> >||.>>>> Integrating over >, we then obtain <\eqnarray*> >||> \|\ x>*\ x=\,u|)>-\,u|)>.>>>> If the contour > has the property that |)>=\|)>> for all \\> (where the equality is allowed to hold at the limit if necessary), then yields the desired annihilator with . In general, we need to multiply on the left by an annihilator of ,u|)>-\,u|)>>. Now assume that we have a computable confined reduction |]>:\\\>. Then the functions in the sequence , f|]>, f|]>,\> can all be computed and they belong to a finite dimensional >-vector space . Using linear algebra, this means that we can compute a relation <\equation> K*+\+K* f|]>=*f+\+K*\ f|]>=0 with ,\,K\\>. Taking <\eqnarray*> ||+\+K*\>>|>||-\\ \,>>>> we thus obtain(). If the relation() has minimal order and the reduction |]>> is normal, then it can be shown that there exist no relations of the form() of order lower than . Let \r>> be a matrix and denote the -th row of by >>. Assuming that >\0>, its pivot > is the smallest index with \0>. We say that is in row swept form if there exists a,r|}>> such that >\0,\,U>\0,U>=\=U>=0> and ,\>=0> for all i\k>. Notice that has rank in this case. An invertible matrix \r>> such that is in row swept form will be called a row sweaper for .
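To make the preceding definitions concrete, here is a minimal Python sketch (with our own helper names; entries are exact rationals) of a Gaussian elimination variant that computes a row sweaper: it returns an invertible S, built from row swaps and row subtractions, such that S*U is in row swept form, with the nonzero rows first and later rows cleared in the pivot columns of earlier rows.

```python
from fractions import Fraction as F

def pivot(row):
    """Index of the first nonzero entry of a row (None for a zero row)."""
    for j, a in enumerate(row):
        if a != 0:
            return j
    return None

def row_sweeper(U):
    """Return (S, V) with S invertible and V = S*U in row swept form.
    S is accumulated by mirroring every swap and elimination on the
    identity matrix, so S*U = V holds throughout."""
    r = len(U)
    V = [[F(x) for x in row] for row in U]
    S = [[F(1) if i == j else F(0) for j in range(r)] for i in range(r)]
    for i in range(r):
        # find the first not-yet-processed nonzero row
        i2 = next((k for k in range(i, r) if pivot(V[k]) is not None), None)
        if i2 is None:
            break                       # all remaining rows are zero
        V[i], V[i2] = V[i2], V[i]
        S[i], S[i2] = S[i2], S[i]
        p = pivot(V[i])
        for k in range(i + 1, r):       # clear the pivot column below row i
            v = V[k][p] / V[i][p]
            if v:
                V[k] = [a - v * b for a, b in zip(V[k], V[i])]
                S[k] = [a - v * b for a, b in zip(S[k], S[i])]
    return S, V
```

For example, row_sweeper([[2, 4], [1, 3]]) returns S = [[1, 0], [-1/2, 1]] and S*U = [[2, 4], [0, 1]].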
We may compute such a matrix using the routine below, which is really a variant of Gaussian elimination. Whenever we apply this routine to a matrix such that the truncated matrix > with rows >,\,U>,0,\,0>> is in row swept form, we notice that these first rows are left invariant by the row sweaping process. In other words, the returned row sweaper is of the form >|>|>|>>>>>>. If, in addition, the matrix has rank , then is of the form >|>|>|>>>>>>. <\named-algorithm|RowSweaper>> Id>, U> <\indent> ,j>=0> for all \i> and Let \i> be minimal such that ,j>\0> for some Swap the -th and >-th rows of and R>> > <\indent> ,\>\S,\>-v*R,\>*S>>, ,\>\R,\>-v*R,\>*R>> Let \> and \*\|]>r>>. We may regard as a Laurent polynomial with matrix coefficients \\r>>: <\eqnarray*> ||\>T*x.>>>> If 0>, then we denote \:T\0|}>> and \:T\0|}>>. For any \\>, we also denote > T|)>=x>*T|)>>. Setting <\eqnarray*> =\>|>|*T*A+T+i*x*T,>>>> the equation() implies <\eqnarray*> *T*y|)>>||*U*y,>>>> for any constant matrix \r>>. The matrix can also be regarded as a Laurent polynomial with matrix coefficients \\r>>. We say that is a for() if > is an invertible matrix. <\proposition> For all \\>, we have <\eqnarray*> > T|)>>||> \.>>>> <\proof> Setting >, =\> T> and =\|)>>, we have <\eqnarray*> >||*x>*T|)>*A+x>*T|)>+\*x-1>*T|)>+i*x-1>*T|)>>>|||>**T|)>*A+T|)>+|)>*x*T|)>|)>>>|||>*U|)>.>>>> In other words, =\> U>. <\proposition> Assume that \\> and that \r>> is invertible. Then <\enumerate-alpha> is a head chopper for )> if and only if >*T> is a head chopper for )>. is a head chopper for )> if and only if is a head chopper for )>. <\proof> Assume that is a head chopper for(). Setting =\> T> and =\|)>>, we have =\> U> and >=U|)>> is invertible. Similarly, setting =P*T> and =\|)>>, we have =P*U>, whence >=P*U> is invertible. The opposite directions follow by taking > and > in the roles of > and . Notice that the equations(\U) and Proposition generalize to the case when \*\|]>r>> for some arbitrary . 
Notice also that deg T+\>, where \max,-1|)>>. Given \> and\>,let <\eqnarray*> >||\*\|]>r>:deg T\d|}>>>|>||M:deg \\d+\-e|}>.>>>> It is easy to see that both > and > are |]>>-modules. Now consider a matrix \*\|]>r>> with rows >,\,T>\M> ordered by increasing degree >\\\deg T>>. Let >, let > be the matrix with rows >>*T>,\,\>>*T>>, and let be maximal such that >\d>. We say that is a>head annihilator> for() if the following conditions are satisfied: <\description> The rows of form a basis for the |]>>-module >; The matrix > is invertible; The first rows of -e>> are >-linearly independent. The matrix *x>*Id> is obviously a >-head annihilator. If , then we notice that implies that is a head chopper for(). The following proposition is also easily checked: <\proposition> For any \\>, we have <\eqnarray*> >>||>*M>>|,e>>||>*M.>>>> Moreover, is a >-head annihilator if and only if > T> is a ,e|)>>-head annihilator. <\proposition> Let be a >head annihilator for)>. Let > and be as in >\U> and denote >=rank-e>|)>>. Then there exists an invertible matrix \r>> of the form >|>|>|>>>>>> such that the last >> rows of -e>> vanish and such that is a >-head annihilator for)>. <\proof> Let >|>||>>>>> be the row sweaper for -e>> as computed by the algorithm from section. By construction, >=deg T>> for all k>. We claim that >=deg T>=d> for all k>. Indeed, if >\d>, then this would imply that |)>>=0>, which contradicts . From our claim, it follows that >\\\deg >> and is maximal with the property that >\d>. Since the first rows of and > coincide, the first rows of -e>> are >-linearly independent. This shows that is satisfied for. As to , let \\r>> be the invertible matrix with =J=>|>|>|>>>>>>. Then we notice that =*\>, whence =*N> is invertible. The rows of clearly form a basis for >, since is invertible. <\proposition> Let be a >head annihilator for)>. Let >, let >=rank-e>|)>>, and assume that the last >> rows of -e>> vanish. Let >> be the matrix with rows T|)>>,\, T|)>>,\>,T>+1,\>,\,T>>. 
Then >> is a >head annihilator for)>. <\proof> We have >>=deg T>-1\d> for all k>> and >>=deg T>=d> for all k>>. In particular, we have >>\\\deg T>>> and >> is maximal with the property that >>,\>\d>. Setting >=\>|)>>, we also observe that >>=\>|)>> for all k>>. Since -e>|)>=k>> and the last >> rows of -e>> vanish, the first >> rows of both -e>> and >-e-1>> are >-linearly independent. In other words, is satisfied for>>. As to , we observe that >|)>=\>, whence >|)>=N> is invertible. Let us finally show that >> forms a basis for the |]>>-module >. So let M>. Then M>, so > for some row matrix =\+\*\+\\\|]>r>>. Setting |)>>, we have d+\-e-1>, whence -e>=*U-e>=0>>. Since the first >> rows of -e>> are >-linearly independent and the last >> rows of -e>> vanish, we get |)>=0> for all k>>. Let |~>> be the row vector with |~>=\*\> for k>> and |~>=\> for k>>. By what precedes, we have |~>\\|]>r>> and >|)>+\+\>|)>>. Now we have >|)>=\>>|)>|)>=|~>>>|)>> for k>> and >|)>=|~>>>|)>> for k>>. In other words, |~>>|)>>, as desired. Propositions and allow us to compute >-head annihilators for() with arbitrarily large. Assuming that we have in for sufficiently large , this yields the following algorithm for the computation of a head chopper for(): <\named-algorithm|HeadChopper,A|)>>> \*Id,U\\> <\indent> > is invertible |)>> \> >\rank|)>>, \>>*\>|>||>>>>>>>> \*T,\*U|)>> <\proposition> Let >. Consider the value of at the beginning of the loop and after iterations. Then is a >-head annihilator. <\proof> We first observe that > throughout the algorithm. Let us now prove the proposition by induction over . The proposition clearly holds for . Assuming that the proposition holds for a given , let us show that it again holds at the next iteration. Consider the values of and at the beginning of the loop and after iterations. Let be maximal such that >\d>. From the induction hypothesis, it follows that the first rows of > are >-linearly independent, whence the matrix is of the form >|>|>|>>>>>>. 
Now Proposition implies that is still a >-head annihilator. Since the last >> rows of >> vanish, Proposition also implies that > is a>-head annihilator. This completes the induction. Notice also that >\k> is maximal with the property that |)>>,\>\d>. <\proposition> If the algorithm > does not terminate, then there exists a non zero row matrix \*\|]>|]>r>> with =0>. In particular, =0>. <\proof> Assume that > does not terminate. Let > be the value of at the beginning of the main loop after iterations. Also let > and > be the values of and > as computed during the >-th iteration. Let > be maximal such that ,\>\d\deg \>. Using the observation made at the end of the above proof, we have \k\\>, so there exist an index \\> and >\r> with =k>> for all e>. Furthermore, <\equation*> J=>>|>|>|>>>>>,\=>*\>|>||>>>>>>, and <\eqnarray*> >||*T|)>.>>>> Moreover, for e>, the row sweaper > is even of the form <\eqnarray*> >||>>>|>|>|>>>>>>>.>>>> By induction on \>, we observe that \\*\|]>r>>. For e>, we also have *T|)>>\e-e> for all k>>, again by induction. Consequently, *T-\*T|)>\e-e> for all -e>, which means that the sequence *T> converges to a limit *T>> in |]>|]>r>>. By construction, the first >> rows of >> are zero, its last >> rows have rank >>, and >|)>=0>. We conclude by taking to be the last row of >>. <\theorem> The algorithm > terminates and returns a head chopper for)>. <\proof> We already observed that > throughout the algorithm. If the algorithm terminates, then it follows that is indeed a head chopper for(). Assume for contradiction that the algorithm does not terminate and let \*\|]>|]>r>> be such that =0>. Let \r>> be afundamental system of solutions to the equation)>, where > is some differential field extension of |)>|)>> with constant field >. From =0> we deduce that =0>, whence \>. More generally, R|)>=0> whence R|)>*y|)>=0> and R|)>*y\\> for all \>. 
Since the space > has dimension over >, it follows that there exists apolynomial \\|]>> of degree at most in > such that *y=0> and \0>. Since is a fundamental system of solutions, we have 0>. This contradicts the existence of an element \\\> with*y=0>. Let be a head chopper for(). Replacing by T> if necessary, we may assume without loss of generality that \*\r>> and \\r>>. Let =deg U>. Writing with \*\r>> and \>, let > to be the set of \> for which =0> or >|)>=0>. For any \>, let <\eqnarray*> >||\\r>:\i\d,i-\\\\\=0|}>.>>>> If \> and \\>, then the matrix >\\r>> is invertible. We define the >-linear mapping :\\\>> by <\eqnarray*> |)>>||-*U>|)>*x*U.>>>> We indeed have |)>\\>, since *U>|)>*x*U=\*x+O|)>>. The mapping > also induces a mapping *y\\*y;\*y\\|)>*y> that we will still denote by >. Setting *U>>, the relation() yields <\equation*> -\|)>|)>*y=c*x*U*y=*T*y|)>. This shows that the mapping > is a reduction. If \> and \\>, then we have =\> and the identity map :\*y\\*y>> is clearly a reduction as well. Since compositions of reductions are again reductions, we also obtain a reduction >\\\\:\*y\\-1>*y> for each . Now let |\|\>:\r>*y\\r>*y>> be the unique mapping with |\*y|\>=>\\\\|)>*y|)>> for all \> and \\>. Then |\|\>> is clearly a reduction as well and it has a finite dimensional image |\|\>\\-1>>. For any \\r>>, we call |\*y|\>> the of*y>. The following straightforward algorithm allows us to compute head reductions: <\named-algorithm|HeadReduce|)>>> <\indent> >=0> for all \\\> > Let \\\> be maximal with >\0> \>*U>> \\-c*x*U> <\proposition> The routine > terminates and is correct. <\remark> It is straightforward to adapt so that it also returns the certificate \\*\r>> with *y-*y|)>\\-1>*y>. Indeed, it suffices to start with \0> and accumulate \\+c*x*T> at the end of the main loop. <\remark> The algorithm is not very efficient. The successive values of can be computed more efficiently in a relaxed manner. 
<\remark> The algorithm also works for matrices \\r>> with an arbitrary number of rows . This allows for the simultaneous head reduction of several elements in r>*y>, something that might be interesting for the application to creative telescoping. Head reduction essentially allows us to reduce the valuation in > of elements in > through the subtraction of elements in \>. Tail reduction aims at reducing the valuation in > in a similar way for any > in the algebraic closure |^>> of >. More precisely, let \>, \|^>> and \*|^>|)>|]>r>>. We may regard as a Laurent polynomial in > with matrix coefficients \|^>r>>: <\eqnarray*> ||\>T*|)>.>>>> If 0>, then we denote its valuation in > by > T=min\:T\0|}>>. Setting <\eqnarray*> =\>>|>|*T*A+T+i*|)>*T,>>>> the equation() implies <\eqnarray*> |)>*T*y|)>>|||)>*U*y,>>>> for any matrix |^>r>>. The matrix can also be regarded as a Laurent polynomial with matrix coefficients \|^>r>>. We say that is a tail chopper at >> for() if > U>> is an invertible matrix. In fact, it suffices to consider tail choppers at the origin: <\lemma> Let \*|^>|)>|]>r>>, where \|^>>. Define =T,i|)>>, |~>=\|)>> and =A|)>>. Then is a tail chopper at > for)> if and only if > is a tail chopper at for|~>*=*>. <\proof> Setting =\|)>>, we have =U|)>>. Consequently, > =val U> and > >=U U>>. There is also a direct link between head choppers and tail choppers under the change of variables x>. <\lemma> Let \*|^>|]>r>>. Setting =x>, we define |~>|)>=-x*\>, |)>=A> and ,i|)>=T>. Then is a tail chopper at for)> if and only if > is a head chopper for|~>*=*>. <\proof> Setting =\|)>>, we have <\eqnarray*> ,-i|)>>|||~>|)>*,-i|)>*|)>+ |\ >,-i|)>-i**,-i|)>>>|||*\*T*A-x*T-i*x*T>>|||**T*A+T+i*x*T|)>>>|||*U.>>>> Consequently, =val U+2> and >=U U>>. Finally, the matrix *Id> is a tail chopper at almost all points >: <\lemma> Let \|^>> be such that |~>|)>\0>. Then *Id> is a tail chopper for)> at >. <\proof> If |~>|)>\0> and *Id>, then() becomes |)>*\*Id+O|)>|)>> for \>.
In particular, > U=-1> and >>=i*\|)>*Id> is invertible in |^>r>>. Now consider a monic square-free polynomial \\> and assume that we wish to compute a tail chopper for() at a root > of > in |^>>. First of all, we have to decide how to conduct computations in |^>>. If > is irreducible, then we may simply work in the field =\/|)>> instead of |^>> and take > to be the residue class of , so that > becomes a generic formal root of >. In general, factoring > over > may be hard, so we cannot assume > to be irreducible. Instead, we rely on the well-known technique of dynamic evaluation. For the convenience of the reader, let us recall that dynamic evaluation amounts to performing all computations as if > were irreducible and =\/|)>> were a field with an algorithm for division. Whenever we wish to divide by a non-zero element > (with \>) that is not invertible, then |)>> provides us with a non-trivial factor of >. In that case, we raise an exception and redo all computations with |)>> or /gcd|)>> in the role of >. So let \\> be a formal root of > and define =|)>>, |)>=y>, |~>|)>=-|)>*\> and |)>=A>. Let ,i|)>> be a head chopper for the equation |~>*=*>, as computed using the algorithm from section. Then >=,-i|)>>> is a tail chopper at > by Lemmas and. Let be a tail chopper for() at \|^>>. Let =|)>>, |)>=y>, |~>|)>=-|)>*\> and |)>=A> be as above, so that ,i|)>=T>, is a head chopper for the equation |~>*=*>. In particular, rewriting linear combinations *y> with \|^>|)>|]>r>> as linear combinations |~>*> with |~>\|^>|]>r>>, we may head reduce |~>*> as described in section. Let |~>\|^>|]>r>> be such that |~>*=||~>*|\>>. Then we may rewrite |~>*> as an element *y> of |^>|)>|]>r>*y>. We call *y> the tail reduction of *y> at > and write *y=|\*y|\>>>. Let |~>> be the finite set of exceptional indices for the above head reduction and =-|~>>.
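A minimal Python sketch of this dynamic evaluation mechanism (with our own names and representation) is the following: inversion in Q[alpha]/(mu(alpha)) is attempted via the extended Euclidean algorithm, and a non-invertible element triggers an exception carrying the discovered factor of mu.

```python
from fractions import Fraction as F

# Dense polynomials over Q, coefficient lists by increasing degree
# (illustrative helper names, not the paper's notation).
def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def padd(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else F(0)) +
                 (q[i] if i < len(q) else F(0)) for i in range(n)])

def pmul(p, q):
    if not p or not q:
        return []
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def pdivmod(p, q):
    p, quo = list(p), [F(0)] * max(len(p) - len(q) + 1, 0)
    while len(p) >= len(q):
        c, d = p[-1] / q[-1], len(p) - len(q)
        quo[d] += c
        for i, b in enumerate(q):
            p[d + i] -= c * b
        trim(p)
    return trim(quo), p

def ext_gcd(p, q):
    # Returns (g, s, t) with s*p + t*q = g.
    r0, r1, s0, s1, t0, t1 = list(p), list(q), [F(1)], [], [], [F(1)]
    while r1:
        quo, rem = pdivmod(r0, r1)
        neg = [-c for c in quo]
        r0, r1 = r1, rem
        s0, s1 = s1, padd(s0, pmul(neg, s1))
        t0, t1 = t1, padd(t0, pmul(neg, t1))
    return r0, s0, t0

class MuSplits(Exception):
    """Raised when mu turns out to be reducible; carries a proper factor."""
    def __init__(self, factor):
        super().__init__("mu splits")
        self.factor = factor

def inv_mod(a, mu):
    """Inverse of a nonzero residue a in Q[alpha]/(mu), computed as if mu
    were irreducible; a non-constant gcd(a, mu) exhibits a factor of mu."""
    g, s, t = ext_gcd(a, mu)
    if len(g) > 1:
        raise MuSplits(g)
    return trim([c / g[0] for c in s])
```

A caller would catch MuSplits and redo the computation with the discovered factor and its cofactor in the role of mu, exactly as described above.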
Setting >> and =val> U>, it can be checked that the following algorithm computes the tail reduction at >: <\named-algorithm|TailReduce|)>>> <\indent> >=0> for all |)>\\> > Let |)>\\> be minimal with >\0> \>*U>> \\-c*|)>*U> In the particular case when <\equation> \=-L,A=|||>|>||>|>||||>|>|>|>|>>>>>,y=>>|>>|>>>>>>> for some operator \|]>> of order , the system() is equivalent to <\eqnarray*> ||>>>> Given a general system(), there always exists an element \*y> such that ,f>> are >-linearly independent. Such an element is called a cyclic vector and, with respect to the basis ,f>> of *y>, the equation() transforms into an equation of the form(). For efficient algorithms to compute cyclic vectors, we refer to. In the remainder of this section, we focus on systems() that are equivalent to(), with >, and as in(). Let us start with a quick review of some well-known results about the asymptotic behaviour of solutions to() when \>. We define <\eqnarray*> >|||^>>|)>|)>>>|>||>*log> x>>>> to be the set of polynomials in whose coefficients are Puiseux series in >, together with the corresponding set of . The set > is asymptotically ordered by <\equation*> x>*log x\x>*log x\\\\\=\\i\\|)> and elements of > can be written as series \\>f>*\> with >\|^>>. We call \\:f>\0|}>> the support of . If 0>, then the maximal element > of the support is called the of . We may also regard elements of > as series \\>f>*x>> in > with coefficients in |^>>. If 0>, then we denote by =val> f=max\\:f>\0|}>> the corresponding valuation of in >. Notice that we have =x>*log x> for some \>. Let >=\:a\0|}>>. We write |^>>>|]>> for the set of finite |^>>-linear combinations of elements of >>>. The sets |^>>>|]>> and |^>>|]>> are defined likewise. Consider the formal exponential monomial group <\eqnarray*> >|||^>>*\|^>>>|]>>.>>>> It is well known that the equation() admits a basis ,\,h> of formal solutions of the form <\eqnarray*> >||*\,>>>> with \\>, \\> and such that the monomials >*\,\,\>*\> are pairwise distinct.
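For two concrete instances of this result (our own examples), written with the Euler derivation delta = x d/dx:

```latex
% L = \delta^2 - 1: solution basis \{x, x^{-1}\},
% no exponential or logarithmic parts.
(\delta^2 - 1)\,x^{\pm 1} = (\pm 1)^2\,x^{\pm 1} - x^{\pm 1} = 0 .
% L = \delta + x^{-1}: solution basis \{\mathrm{e}^{1/x}\},
% with exponential part \mathrm{e}^{1/x}.
(\delta + x^{-1})\,\mathrm{e}^{1/x}
  = -x^{-1}\,\mathrm{e}^{1/x} + x^{-1}\,\mathrm{e}^{1/x} = 0 .
```

In the first case the monomials of the basis are x and x^{-1}; in the second, the single monomial e^{1/x} already involves a non-trivial exponential part.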
We will write <\eqnarray*> >||>*\,\,\>*\|}>.>>>> Notice that this result generalizes to the case when \>|]>|]>> a straightforward change of variables x>> with \\>>. Let us now consider a linear differential operator |^>>|]>|]>>. Such an operator can also be expanded with respect to >; we denote by > the valuation of in > and by \|^>|]>> the coefficient of >> in this expansion. From the valuative point of view it is more convenient to work with linear differential operators |^>>|]>|]>>, where =x*\> is the Euler derivation. Such operators can be expanded with respect to > in a similar way and > and > are defined as above. For |^>>|]>|]>>, let >> be the corresponding operator in |^>>|]>|]>>. If has order , then <\equation*> v\v>|)>\v+r. For |^>>|]>|]>>, let >> be the corresponding operator in |^>>|]>|]>>. If has order , then <\equation*> v-r\v>|)>\v. Given \\>, we notice that its logarithmic Euler derivative \\ \/\> belongs to |^>>>|]>>. Let\>\|^>>|]>|]>> be the operator obtained from by substituting +\> for >. Then <\eqnarray*> \>>||*L|)>>>>> for all \>. If has order , then <\equation*> v+r*v|)>\v\>|)>\v-r*v|)>. We call \>> the of by >. Now consider an operator |^>>|]>|]>> and \>. If |)>=0>, then it can be shown that \\>. Otherwise, D|)>*x>>, whence =v+v>. In other words, <\eqnarray*> \\>|>|=v+v.>>>> More generally, if \> and \\>, then <\eqnarray*> *\\\>|>|*L|)>|)>=v\>|)>+v.>>>> Let <\eqnarray*> >||*\/\|)>:\*\\\,\\\,\\\\|}>.>>>> If |^>|]>>, then it can be shown using the Newton polygon method that \deg L-val L>. Let =-min-v|)>,1|)>> and notice that |)>\v-\> for all \|]>r>>. <\theorem> For any \|]>r>> and >, we have <\eqnarray*> -\\v>|>|+2*r*\+r+1.>>>> <\proof> Let > be atranscendental constant over > with \=0> and let =\|)>>. Then |)>\\|]>r>> satisfies |)>|)>=v>, |)>|)>=v>, and <\eqnarray*> >*T|)>*y|)>>||>*U|)>*y.>>>> We may rewrite |)>*y=K f> for the linear differential operator |)>+\+T|)>*\\\|]>|]>>. Notice that =v|)>|)>=v>. 
Similarly, we may rewrite |)>*y=H f> for some operator \|]>|]>> with =v|)>|)>=v>. Let ,\\\|]>|]>> be the operators given by =+\|)>*K>> and =H>>, so that |)>\v+r> and \v|)>>. By construction, <\eqnarray*> f>||*K> f+\*K> f>>|||>*\>*K> f|)>>>|||>*\>*T|)>*y|)>>>||||)>*y>>||| f.>>>> Since has order at most , there exists a monomial *\\\\\> with \\> and \\\>. Now the solutions of g=0> are spanned by the solutions to > g=0> and any particular solution to the inhomogeneous equation > g=x>>. In7> it is shown that the latter equation admits a so-called distinguished solution x>*\>|)>|)>> with >*g|)>=-v>|)>> and >*g>*x>\\>. Since > is transcendental over >, it follows that *\\\\\>>. Let \\> be a solution to *\|)>=0> with >=\>. Then *\> satisfies <\eqnarray*> *\ f|)>>||\> \|)>\v\>|)>+v|)>-1,>>>> whereas <\eqnarray*> *\ f|)>>||\> \|)>=v\>|)>+v|)>.>>>> (The last equality follows from() and the fact that *\\\\\>>.) Now f=x*\ f> implies <\equation*> v|)>+r*\\v|)>-r*v| \|\>|>\v\>|)>\v\>|)>-1\v|)>+r*v| \|\>|>-1\v|)>-r*\-1, whence <\equation*> v\v|)>\v|)>+2*r*\+1\v+2*r*\+r+1. We already noticed before that \v-\>. We notice that our definition of > coincides with the one from section, since the degree in coincides with the opposite of the valuation in >. Theorem immediately yields a bound for the number of iterations that are necessary in in order to obtain a head chopper. More precisely: <\corollary> There exists a head chopper M> for )> with r+1+2*r*\+\>. <\proof> Let > be the head chopper as computed by and let be its last row. Then =v|)>> and \M,e>> where |)>-v|)>+\>. Now the theorem implies |)>\v+r+1+2*r*\>, whence r+1+2*r*\+\>. We conclude that >|)>> is a head chopper in >. Let >=\>, >=\>, and let >\\> be the smallest number for which there exists a head chopper with <\equation> val> \=val> T-\>+e>. We will call >> the of() at infinity. Corollary shows that >\r+1+2*r*\>+\>>.
In a similar way, given \|^>>, let >=-min> A-val> \,-1|)>> and let >\\> be the smallest number for which there exists a tail chopper with <\equation> val> \>=val> T-\>+e>. We call >> the of() at >. Defining >> in a similar way as >>, but at the point >, one has >\r+1+2*r*\>+\>>. >> Let \|)>|)>r>> and *R*A+R>, so that =S*y>. Let >\\*,x|}>>. The proof technique from Theorem can also be used for studying > as a function of >: <\theorem> For all \|)>|)>r>> with \\>> and *R*A+R>, we have <\eqnarray*> -\\v>|>|+2*r*\+r+1.>>>> <\proof> Let > and rewrite *> and *> with |)>=0>. Then we have <\eqnarray*> **y|)>>||**y.>>>> We may rewrite *y=K f> and =H f> for some \|]>|]>> with =v|)>=0> and =v|)>>. Let ,\\\|]>|]>> be the operators given by =+i|)>*K>> and =H>>, so that v|)>=v>|)>\r> and |)>\v|)>>. In a similar way as in the proof of Theorem, we deduce that <\eqnarray*> f>|| f.>>>> Since has order at most , there exists a monomial *\\\\\> with \\> and \\\>. The distinguished solution to the equation > g=x> with >=\\|}>> has valuation =-i-v>|)>>, so that v\-i>. Since =\\\>>, it follows that \\>, whence *\\\\\>>. In a similar way as in the proof of Theorem, we obtain <\eqnarray*> |)>>|>||)>-2*r*\-1,>>>> whence <\equation*> v-v=v|)>-v|)>=v-v\v|)>-v|)>+r\2*r*\+r+1. The inequality \v-\> in the other direction is straightforward. <\corollary> We can compute an integer >\\> such that for all \|)>|)>r>> and *R*A+R> with > R\\>>, we have <\eqnarray*> > S>|>|> R-\>+e>.>>>> <\proof> Let be a head chopper for() such that > satisfies =v-\>+e>=0>. Let > be sufficiently small such that \\>> and > is defined and invertible for all \>. We take >=\-2*r*\-r-1>. Assume for contradiction that \\>> and \v-\>+e>>. Let > and > be such that *y=|S*y|\>> and *y|)>=*y>. By construction, -R|)>\v+\>-e>>, whence |)>=v>, and |)>\\>. But then |)>-v|)>=v|)>-v\\-\>=2*r*\+r+1>. This contradicts Theorem, since |)>\\>\\> implies >=x|)>>\\>>.
<\corollary> Given \|^>>, we can compute an integer >\\> such that for all |^>|)>|)>r>> and *R*A+R> with > R\\>>, we have <\eqnarray*> > S>|>|> R-\>+e>.>>>> <\proof> Follows from the previous proposition after a change of variables. Let us now study how head and tail reductions can be glued together into a global confined reduction on =\|]>r>*y>. More generally, we consider the case when =\|]>r>*y>, where \\> is a monic square-free polynomial such that > divides > for some \>. We assume that we have computed a head chopper for() and tail choppers >> for() at each of the roots ,\,\>> of > in |^>>. In particular, we may compute the corresponding head and tail reductions. Given an element > of the Galois group of |^>> over >, we may also assume without loss of generality that the tail choppers were chosen such that |)>>=\>|)>> for all . Partial fraction decomposition yields |^>>-linear mappings <\equation*> \>:|^>|]>r>\|)>*|^>|)>|]>r> and <\equation*> \>:|^>|]>r>\|^>r> with <\eqnarray*> >||>|)>+\>|)>+\+\>>|)>,>>>> for all \|^>|]>r>>. This allows us to define a global reduction *y|]>> of *y> by <\eqnarray*> *y|]>>|||\>|)>*y|\>+|\>|)>*y|\>>+\+|\>>|)>*y|\>>>.>>>> By our assumption that the tail choppers were chosen in a way that is compatible with the action of the Galois group, we have *y|]>\\|]>r>*y> whenever \\|]>r>>. Furthermore, the restriction of the reduction on |^>|]>r>*y> to |]>r>*y> is still a reduction. <\remark> It is plausible that computations in algebraic extensions can be avoided by combining tail choppers at conjugate roots under the action of the Galois group. However, we have not yet worked this idea out. Given the confined reduction |]>:\\\> from section, let us show how to construct a normal confined reduction |\|\>:\\\>. For each \,\,\,\|}>>, assume that we have computed constants >> and >> with the property that for all *y,\*y\\> with >|)>\\>> and *y|)>=\*y>, we have >|)>\v>|)>+\>>.
Consider the finite dimensional >-vector space |]>\:f\\|}>> and let >\min> \:\*y\|]>\|}>> for all \,\,\,\>|}>>. Let > be the >-subvector space of> of all *y\\> with > \\min>,\>|)>-\>> for all \,\,\,\>|}>>. This space is finite dimensional and our assumptions imply that we cannot have *y|)>\|]>> for *y\\\\>. In other words, for any \> with \|]>>, we have \>. Now let \ \\|]>> and let be a supplement of in |]>> so that |]>=V\W>. We may compute bases of and using straightforward linear algebra. The canonical >-linear projections :|]>\V> and :|]>\W>> with +\=Id> are also computable. We claim that we may take |f|\>\\|)>> for every \>. <\proposition> The mapping |\|\>:\\\;f\\|)>> defines a computable normal confined reduction on >. <\proof> The mapping |\|\>> is clearly a computable confined reduction on >. It remains to be shown that |f|\>=0> for all \>. Now |]>-f\\ \>, so |]>\\ \> and there exists a\> with =|]>>. Since \|]>>, it follows that \> and \\ \\|]>=V>. In other words, |]>=g\V> and |f|\>=\|]>|)>=0>. <\acknowledgments*> We would like to thank Pierre Lairez for a helpful remark on an earlier version of this paper. <\bibliography|bib|tm-plain|all> <\bib-list|13> G.D.Birkhoff. Singular points of ordinary differential equations. , 10:436\U470, 1909. A.Bostan, F.Chen, S. ChyzakZ.Li. Complexity of creative telescoping for bivariate rational functions. , 203\U210. New York, NY, USA, 2010. ACM. A.Bostan, F.ChyzakÉ.dePanafieu. Complexity estimates for two uncoupling algorithms. , ISSAC '13, 85\U92. New York, NY, USA, 2013. ACM. S.Chen. . , École Polytechnique, 2011. S.Chen, M.van Hoeij, M.KauersC.Koutschan. Reduction-based creative telescoping for Fuchsian D-finite functions. , ArXiv, 2016. . S.Chen, M.KauersC.Koutschan. Reduction-based creative telescoping for algebraic functions. , 175\U182. New York, NY, USA, 2016. ACM. F.Chyzak. . Habilitation, École polytechnique, 2014. J.Della Dora, C.DicrescenzoD.Duval. A new method for computing in algebraic number fields. 
G.GoosJ.Hartmanis, , 174, 321\U326. Springer, 1985. L.Dumont. . , École Polytechnique, 2016. J.vander Hoeven. Relax, but don't be too lazy. , 34:479\U542, 2002. J.vander Hoeven. , 1888. Springer-Verlag, 2006. P.Lairez. . , École polytechnique, Nov 2014. D.Zeilberger. The method of creative telescoping. , 11(3):195\U204, 1991. <\initial> <\collection> <\attachments> <\collection> <\associate|bib-bibliography> <\db-entry|+7BhgxM6ULkOFw6|article|Zeil91> <|db-entry> > <\db-entry|+7BhgxM6ULkOFvh|phdthesis|Chyz:hab> <|db-entry> > <\db-entry|+7BhgxM6ULkOFvb|inproceedings|BCCL10> <|db-entry> F. Z. > <\db-entry|+7BhgxM6ULkOFvf|phdthesis|Chen:phd> <|db-entry> > <\db-entry|+N2hDTL9a7kmitL|phdthesis|Lairez:phd> <|db-entry> > > <\db-entry|+7BhgxM6ULkOFvi|inproceedings|CKK16> <|db-entry> M. C. > <\db-entry|+7BhgxM6ULkOFvg|techreport|CHKK16> <|db-entry> M. van M. C. > > <\db-entry|+7BhgxM6ULkOFvl|phdthesis|Dum:phd> <|db-entry> > <\db-entry|+Li3PZLCHp1NMmM|article|vdH:relax> <|db-entry> > <\db-entry|+Li3PZLCHp1NMbh|inproceedings|DDD85> <|db-entry> C. D. > J. > <\db-entry|+VeLSxuvhf0wfZV|inproceedings|BCP13> <|db-entry> F. É. 
> <\db-entry|+Li3PZLCHp1NMZu|article|Birk09> <|db-entry> > <\db-entry|+Li3PZLCHp1NMmr|book|vdH:ln> <|db-entry> > <\references> <\collection> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > <\auxiliary> <\collection> <\associate|bib> Zeil91 Chyz:hab BCCL10 Chen:phd Lairez:phd CKK16 CHKK16 Chen:phd Dum:phd Dum:phd vdH:relax DDD85 BCP13 Birk09 vdH:ln <\associate|toc> |math-font-series||font-shape||1.Introduction> |.>>>>|> |math-font-series||font-shape||2.Creative telescoping> |.>>>>|> |math-font-series||font-shape||3.Row swept forms> |.>>>>|> |math-font-series||font-shape||4.Head reduction> |.>>>>|> |4.1.Head choppers |.>>>>|> > |4.2.Head annihilators |.>>>>|> > |4.3.Computing head choppers |.>>>>|> > |4.4.Head reduction |.>>>>|> > |math-font-series||font-shape||5.Tail reduction> |.>>>>|> |5.1.Tail choppers |.>>>>|> > |5.2.Computing tail choppers |.>>>>|> > |5.3.Tail reduction |.>>>>|> > |math-font-series||font-shape||6.Degree and valuation bounds> |.>>>>|> |6.1.Cyclic vectors |.>>>>|> > |6.2.Formal transseries solutions |.>>>>|> > |6.3.Action of linear differential operators on transseries |.>>>>|> > |6.4.Degree and valuation bounds for head and tail choppers |.>>>>|> > |6.5.Valuation bounds for differentiation on |\> |.>>>>|> > |math-font-series||font-shape||7.Global reduction> |.>>>>|> |7.1.Gluing together the head and tail reductions |.>>>>|> > |7.2.Normalizing the reduction |.>>>>|> > |math-font-series||font-shape||Bibliography> |.>>>>|>