<\body>

<\author-affiliation>
  Laboratoire d'informatique, UMR 7161 CNRS, Campus de l'École polytechnique, 1, rue Honoré d'Estienne d'Orves, Bâtiment Alan Turing, CS35003, 91120 Palaiseau
</author-affiliation>

<\abstract>
  The class of reduction-based algorithms was introduced recently as a new approach towards creative telescoping. Starting with the Hermite reduction of rational functions, various reductions have been introduced for increasingly large classes of holonomic functions. In this paper we show how to construct reductions for general holonomic functions, in the purely differential setting.
</abstract>

<\abstract-acm>
  I.1.2 Algebraic algorithms
</abstract-acm>

Let $\mathbb{K}$ be an effective field of characteristic zero and let $\ell \in \mathbb{K}[x]$ be a non-zero polynomial. Consider the system of linear differential equations

<\equation>
  \ell\, y' = A\, y,
</equation>

where $A \in \mathbb{K}[x]^{r \times r}$ is an $r \times r$ matrix with entries in $\mathbb{K}[x]$ and $y$ is a column vector of $r$ unknown functions. Notice that any system of linear differential equations $y' = B\, y$ with $B \in \mathbb{K}(x)^{r \times r}$ can be rewritten in this form by taking $\ell$ to be a common multiple of all denominators. Let $y$ be a formal solution of (1) and consider the $\mathbb{K}[x, \ell^{-1}]$-module $\mathcal{H}$ of linear combinations $\phi\, y = \phi_1 y_1 + \cdots + \phi_r y_r$, where $\phi \in \mathbb{K}[x, \ell^{-1}]^{1 \times r}$ is a row vector. Then $\mathcal{H}$ has the natural structure of a D-module for the derivation $\delta = \delta_x \colon \mathcal{H} \rightarrow \mathcal{H}$ defined by

<\equation*>
  \delta(\phi\, y) = (\phi' + \ell^{-1}\, \phi\, A)\, y.
</equation*>

A $\mathbb{K}$-linear mapping $[\,\cdot\,] \colon \mathcal{H} \rightarrow \mathcal{H}$ is said to be a <em|reduction> for (1) if $f - [f] \in \operatorname{Im} \delta$ for all $f \in \mathcal{H}$. Such a reduction is said to be <em|confined> if its image is a finite dimensional subspace of $\mathcal{H}$ over $\mathbb{K}$, and <em|normal> if, in addition, $[f] = 0$ for all $f \in \operatorname{Im} \delta$. In this paper, we propose a solution to the following problem:

<\prob>
  Design an algorithm that takes the equation (1) as input and that returns a (possibly normal) confined reduction $[\,\cdot\,] \colon \mathcal{H} \rightarrow \mathcal{H}$ for (1), in the form of an algorithm for the evaluation of $[\,\cdot\,]$.
</prob>

Confined reductions are interesting for their application to creative telescoping. After its introduction by Zeilberger, the theory of creative telescoping has known a rapid development. For a brief history of the topic and further references, we point the reader to the literature.
In the next section we recall how confined reductions can be used for the computation of so-called telescopers. It is worth noticing that Problem 1 concerns univariate differential equations, whereas creative telescoping is a multivariate problem. Reduction-based algorithms have appeared recently as a particularly promising approach, both in order to make creative telescoping more efficient and to understand its complexity. The simplest kind of reduction is Hermite reduction, which corresponds to the case of rational functions. More precisely, given $f = a/b \in \mathbb{K}(x)$ with $\deg a < \deg b$ and $\gcd(a, b) = 1$, and writing $b^*$ for the square-free part of $b$, there exist unique $r, s \in \mathbb{K}[x]$ with $\deg r < \deg b^*$ and $\deg s < \deg(b/b^*)$ such that

<\equation*>
  \frac{a}{b} = \frac{r}{b^*} + \left( \frac{s}{b/b^*} \right)'.
</equation*>

Then the Hermite reduction of $f \in \mathbb{K}(x)$ is defined by $[f] = r/b^*$. Confined reductions have been constructed in increasingly general cases: hyperexponential functions, hypergeometric terms, mixed hypergeometric-hyperexponential terms, algebraic functions, multivariate rational functions, and Fuchsian differential equations. The existence of a reduction-based algorithm for general differential equations was raised as an open problem. Problem 1 is essentially a more precise form of this problem, obtained by specifying the space $\mathcal{H}$ on which the reduction acts.

From a cohomological point of view, reductions can be regarded as an explicit way to exhibit elements in cokernels. An abstract proof of the fact that the cokernel of the derivation $\delta$ on $\mathcal{H}$ is finite dimensional was already known. Our solution to Problem 1 in particular yields a new constructive proof of this fact.

After the discussion of creative telescoping, we will leave applications aside and focus on the construction of confined reductions. This construction proceeds in two stages. We first consider the $\mathbb{K}[x]$-submodule $\bar{\mathcal{H}}$ of $\mathcal{H}$ of linear combinations $\phi\, y$ with $\phi \in \mathbb{K}[x]^{1 \times r}$. We construct a $\mathbb{K}$-linear <em|head reduction> $\lceil \cdot \rceil \colon \bar{\mathcal{H}} \rightarrow \bar{\mathcal{H}}$ such that $\lceil f \rceil - f \in \operatorname{Im} \delta$ and $\deg \lceil f \rceil$ is bounded from above, for all $f \in \bar{\mathcal{H}}$. Here we understand that $\deg(\phi\, y) = \max(\deg \phi_1, \ldots, \deg \phi_r)$ for all $\phi \in \mathbb{K}[x]^{1 \times r}$.
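The Hermite decomposition above can also be computed by plain linear algebra, in the Horowitz–Ostrogradsky style: make an ansatz for $r$ and $s$ with undetermined coefficients and solve the resulting linear system. The following sketch assumes the sympy library; the helper name `hermite_reduce` and the variable conventions are ours, not from the text.

```python
import sympy as sp

x = sp.symbols('x')

def hermite_reduce(a, d):
    """Hermite reduction of a/d (Horowitz-Ostrogradsky variant): returns
    (p, dm, q, ds) with a/d = (p/dm)' + q/ds, where ds is the square-free
    part of d and dm = d/ds.  Assumes d has a repeated factor (otherwise
    a/d is already Hermite-reduced)."""
    a, d = sp.Poly(a, x), sp.Poly(d, x)
    dm = sp.gcd(d, d.diff(x))                 # the "deflation" of d
    ds = sp.div(d, dm)[0]                     # square-free part of d
    m, n = sp.degree(dm, x), sp.degree(ds, x)
    pc = sp.symbols(f'p0:{m}')                # undetermined coefficients
    qc = sp.symbols(f'q0:{n}')
    p = sum(c * x**i for i, c in enumerate(pc))
    q = sum(c * x**i for i, c in enumerate(qc))
    res = sp.diff(p / dm.as_expr(), x) + q / ds.as_expr() \
          - a.as_expr() / d.as_expr()
    num = sp.Poly(sp.numer(sp.together(res)), x)
    sol = sp.solve(num.all_coeffs(), list(pc + qc), dict=True)[0]
    return (sp.expand(p.subs(sol)), dm.as_expr(),
            sp.expand(q.subs(sol)), ds.as_expr())
```

For instance, `hermite_reduce(1, (x**2 + 1)**2)` recovers the classical decomposition $1/(x^2+1)^2 = \bigl(\tfrac{x/2}{x^2+1}\bigr)' + \tfrac{1/2}{x^2+1}$, so that the Hermite reduction of $1/(x^2+1)^2$ is $\tfrac{1/2}{x^2+1}$.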
The head reduction procedure relies on the computation of a so-called <em|head chopper>, using an algorithm that will be detailed further on. We also need a variant of row echelon forms, which will be described separately. Head reduction may also be regarded as a way to reduce the valuation of $f$ in $\mathcal{H}$ at the point at infinity. We then turn to <em|tail reductions>, whose aim is to reduce the valuation of $f$ at all other points of $\mathbb{K}$ and its algebraic closure $\hat{\mathbb{K}}$. This is essentially similar to head reduction, up to a change of variables, while allowing ourselves to work in algebraic extensions of $\mathbb{K}$. In the last sections, we show how to combine the head reduction and the tail reductions at each of the roots of $\ell$ into a global confined reduction on $\mathcal{H}$. Using straightforward linear algebra and suitable valuation bounds, one can further turn this reduction into a normal one. Our solution to Problem 1 is made precise in the main theorems below. As far as we are aware, these results are new, and they provide a positive answer to the above-mentioned open problem. The application to creative telescoping is well known.

Some of the techniques that we use are similar to existing ones. First of all, the construction of head choppers bears some similarities with Abramov's EG-eliminations. Our procedure for head reduction is reminiscent of Euclidean division and of classical algorithms for the computation of formal power series solutions to differential equations: first find the leading term, then continue with the remaining terms. A similar \Ppolynomial reduction\Q procedure has been described before in a special case. Finally, the idea of gluing \Plocal reductions\Q together into a global one is also common in this area.

Subsequently to the publication of a preprint version of this paper, the results have been further sharpened and generalized. In follow-up work, an analogous algorithm was proposed for the case of higher order linear differential equations instead of first order matrix equations.
That work is mostly based on similar techniques, but it also introduced a new tool which, in the terminology of the present paper, makes it possible to avoid introducing the formal parameter $\sigma$; the corresponding operator then simply becomes multiplication with $x$. Such simplifications make it easier to extend the theory beyond the setting of differential equations (1); generalizations to difference equations have been worked out elsewhere. The original preprint version of this paper also contained degree and valuation bounds for head and tail choppers; one of our motivations was to use these bounds to derive polynomial complexity bounds for creative telescoping. Using the Lagrange identity technique, it is possible to prove even sharper bounds. We refer to the follow-up papers for more information on degree and valuation bounds and on how to exploit them for proving polynomial complexity bounds.

Let $\mathbb{K}$ be a subfield of $\mathbb{C}$. An analytic function $f$ on some non-empty open subset of $\mathbb{C}$ is said to be <em|holonomic> (or <em|D-finite>) over $\mathbb{K}$ if it satisfies a linear differential equation

<\equation>
  L_r\, f^{(r)} + \cdots + L_0\, f = 0,
</equation>

where $L_0, \ldots, L_r \in \mathbb{K}(x)$ are rational functions and $L_r \neq 0$. Modulo multiplication by a common denominator, we may assume without loss of generality that $L_0, \ldots, L_r \in \mathbb{K}[x]$ are actually polynomials. Many, if not most, special functions are holonomic. Examples include the exponential and logarithm functions, the sine and cosine functions, Bessel functions, hypergeometric functions, polylogarithms, etc. Instead of higher order scalar equations such as (2), it is also possible to consider first order linear differential systems

<\equation>
  \ell\, y' = A\, y,
</equation>

where $\ell \in \mathbb{K}[x]$ is a non-zero polynomial and $A \in \mathbb{K}[x]^{r \times r}$ an $r \times r$ matrix with polynomial coefficients. Given a column vector $y = (y_1, \ldots, y_r)^{\top}$ of analytic solutions to (3) on some non-empty open subset of $\mathbb{C}$, it is well known that each component $y_i$ is a holonomic function. Conversely, taking $\ell = L_r$ and

<\equation*>
  A = \begin{pmatrix}
    0 & \ell & & \\
    & \ddots & \ddots & \\
    & & 0 & \ell \\
    -L_0 & -L_1 & \cdots & -L_{r-1}
  \end{pmatrix},
</equation*>

any solution $f$ to (2) corresponds to a unique solution $y = (f, f', \ldots, f^{(r-1)})^{\top}$ of (3). The concept of holonomy extends to multivariate functions.
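The scalar-to-system conversion above is mechanical. The following sketch (sympy; the helper name `companion_system` is ours) builds the companion matrix from the coefficient list $L_0, \ldots, L_r$ and can be checked on a concrete solution.

```python
import sympy as sp

x = sp.symbols('x')

def companion_system(L):
    """Convert L[r]*f^(r) + ... + L[0]*f = 0 into a first order system
    ell * y' = A * y with y = (f, f', ..., f^(r-1)) and ell = L[r]."""
    r = len(L) - 1
    ell = L[r]
    A = sp.zeros(r, r)
    for i in range(r - 1):
        A[i, i + 1] = ell          # ell * y_i' = ell * y_{i+1}
    for j in range(r):
        A[r - 1, j] = -L[j]        # ell * f^(r) = -sum_j L_j * f^(j)
    return ell, A

# sanity check on f'' - f = 0 with the solution f = exp(x):
ell, A = companion_system([sp.Integer(-1), sp.Integer(0), sp.Integer(1)])
f = sp.exp(x)
y = sp.Matrix([f, sp.diff(f, x)])
assert sp.simplify(ell * sp.diff(y, x) - A * y) == sp.zeros(2, 1)
```

For the Airy-type equation $f'' - x f = 0$ one gets $\ell = 1$ and $A = \begin{pmatrix} 0 & 1 \\ x & 0 \end{pmatrix}$, in line with the companion shape displayed above.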
There are again two equivalent formalizations that are respectively based on higher order scalar equations and first order systems. Let us focus on the bivariate case and let =\/\ x> and =\/\ u> denote the partial derivatives with respect to and . Consider a system of linear differential equations <\equation> *\ y>||>|*\ y>||>>>> where \\> is non-zero and \r>> are such that <\eqnarray*> *B|)>+\*A*B>||*A|)>+\*B*A.>>>> A holonomic function in two variables is defined to be a component of a solution to such asystem. The compatibility relation() corresponds to the requirement \ y=\ \ y>, under the assumption that satisfies(). <\example> The vector function <\eqnarray*> ||||>>|>>>>>=*\>>>|*\>>>>>>>>>> satisfies the system() for =1> and <\equation*> A=|>||>>>>,B=|>||>>>>. Assume that > is an effective subfield of > and let be a complex analytic solution of(). By Cauchy-Kowalevski's theorem such solutions exist and can be continued analytically above \\\\=0|}>>. With =\>, let > be the |]>>-module generated by the entries of . Notice that > is stable under both > and>. For any *y\\> with \\|]>r>> and any non-singular contour > in > between two points ,\\\\|}>>, we may consider the integral <\eqnarray*> >||>f*\ x,>>>> which defines a function in the single variable . It is natural to ask under which conditions is aholonomic function and how to compute a differential operator \|]>> with . The idea of is to compute a differential operator \|]>> (called the ), an element \\> (called the ), and =\ \>, such that <\eqnarray*> >||.>>>> Integrating over >, we then obtain <\eqnarray*> >||> \|\ x>*\ x=\,u|)>-\,u|)>.>>>> If the contour > has the property that |)>=\|)>> for all \\> (where the equality is allowed to hold at the limit if necessary), then yields the desired annihilator with . In general, we need to multiply on the left with an annihilator of ,u|)>-\,u|)>>, as operators in the skew ring |]>>. <\example> With as in Example, we have =\*y\\*y>. 
The contour > that follows the real axis from > to > is non-singular and any function in > vanishes at the limits of this contour (for fixed ). In particular, taking >, the integral <\eqnarray*> >|>|>f*\ x=>>sin*\>*\ x>>>> is well defined for all . It can be checked that <\eqnarray*> f+*u*f>||*\ y,>>>> whence we may take +*u\\|]>> as our telescoper and =-*y\\> as our certificate. Integrating over >, it follows that <\eqnarray*> F+*u*F>||*y|]>>>=0.>>>> This equation admits a simple closed form solution <\eqnarray*> ||*\*u>,>>>> for some integration constant >. In general, the computation of such integration constants is adifficult problem that is well beyond the scope of this paper. For our particular example, wehave <\eqnarray*> >||=>>sin*\>*\ x=0,>>>> whence . We could have seen this more directly by observing that the integrand *\>> is an odd function in for all . On the other hand, a similar computation for>>and <\eqnarray*> >|>|>g*\ x=>>cos*\>*\ x>>>> leads to <\eqnarray*> g+*u*g>||*\ y,>>>> and <\equation*> G=c*\*u>,c=>>\>*\ x=>. We have shown how relations of the form() can be used for the computation of parametric integrals(). This leaves us with the question how to find such relations. Many different approaches have been proposed for this task and we refer to for a historical overview. From now on we will focus on the reduction-based approach, which is fairly recent and has shown to be particularly efficient for various subclasses of holonomic functions. Notice that the first equation *\ y=A*y> of the system() is of the form(), where we recall that =\>. Now assume that we have a computable confined reduction |]>:\\\>. Then the functions in the sequence , f|]>, f|]>,\> can all be computed and they belong to afinite dimensional >vector space . Using linear algebra, this means that we can compute a relation <\equation> K*+\+K* f|]>=*f+\+K*\ f|]>=0 with ,\,K\\> and \0>. Taking <\eqnarray*> ||+\+K*\>>|>||-\\ \,>>>> we thus obtain(). 
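The computations of this example can be replayed mechanically. The sketch below is a sympy verification under our reading of the running example: the vector $y = (\sin(x^2 u), \cos(x^2 u))$, the matrices $A$ and $B$, the telescoper $\partial_u + \tfrac{1}{2u}$ and the certificate $\tfrac{x}{2u} f$ are reconstructions consistent with the surrounding text, not verbatim from it.

```python
import sympy as sp

x, u, c = sp.symbols('x u c', positive=True)
y = sp.Matrix([sp.sin(x**2*u), sp.cos(x**2*u)])
A = sp.Matrix([[0, 2*u*x], [-2*u*x, 0]])     # d_x y = A y  (ell = 1)
B = sp.Matrix([[0, x**2], [-x**2, 0]])       # d_u y = B y

# the system and its compatibility relation d_x(B) + B*A = d_u(A) + A*B:
assert sp.simplify(sp.diff(y, x) - A*y) == sp.zeros(2, 1)
assert sp.simplify(sp.diff(y, u) - B*y) == sp.zeros(2, 1)
assert sp.simplify(sp.diff(B, x) + B*A - sp.diff(A, u) - A*B) == sp.zeros(2, 2)

# a telescoper-certificate pair for f = sin(x^2*u):
f = y[0]
lhs = sp.diff(f, u) + f/(2*u)        # (d_u + 1/(2u)) applied to f
cert = x*f/(2*u)                     # certificate g with lhs = d_x g
assert sp.simplify(lhs - sp.diff(cert, x)) == 0

# hence F(u) = int f dx satisfies F' + F/(2u) = 0, solved by c*u**(-1/2):
F = c/sp.sqrt(u)
assert sp.simplify(sp.diff(F, u) + F/(2*u)) == 0
```

The certificate term vanishes when integrated over a contour on which it tends to zero at both ends, which is how the telescoped equation for $F$ is obtained.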
If the relation() has minimal order and the reduction |]>> is normal, then it can be shown16> that there exist no relations of the form() of order lower than . One important property of reduction-based telescoping is that it allows us to compute telescopers without necessarily computing the corresponding certificates. In practice, it turns out that certificates are often much larger than telescopers; this often explains the efficiency of the reduction-based approach. Notice that the above approach can easily be adapted to compute certificates as well, when really needed: it suffices to require that the reduction procedure > also produces a \\> with =\ \>. Given a relation() and \\> with =K f-=\ \>, we indeed have \>. <\example> Continuing Examples and, let us show how to compute a confined reduction|]>:\\\>. Given +Q*y\\=\*y\\*y>, our algorithm to compute> proceeds by induction on P,deg Q|)>>. If 0>, then we take =f>. Otherwise, we may write *x+O|)>> and *x+O|)>>. Setting <\eqnarray*> ||*P*x*y-*Q*x*y,>>>> we have <\eqnarray*> h>||*x+*u*Q*x-*P*x|)>*y+*x-*u*P*x-*Q*x|)>*y,>>>> whence <\equation> \f-\ h is of the form =*y+*y> with ,deg |)>\d-1>. By the induction hypothesis, we know how to compute |]>>, so we can simply take \|]>>. It is easily verified that |]>=\*y\\ y>, again by induction on , so the reduction is confined. Applying our reduction to the functions > and > from Example, we find that <\equation*> |||||||>|||>|>||>| f|]>>||*u*f>|| g|]>>||*u*g>>| f+*u*f|]>>|||| g+*u*g|]>>||>>>> and <\equation*> \ f+*u*f- f+*u*f|]>=-*\ g,\ g+*u*g- g+*u*g|]>=*\ f. This leads to the desired relations() and(). <\remark> In order to simplify the exposition, we have restricted our attention to the bivariate case. Nevertheless, the reduction-based approach extends to the case when is replaced by afinite number of coordinates ,\,u> and satisfies an equation *\> y=B y> with respect to each coordinate > (with suitable compatibility constraints). 
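The inductive reduction of the example above can be implemented directly. The sketch below assumes the concrete pair $y_1 = \sin(x^2 u)$, $y_2 = \cos(x^2 u)$ (our reading of the running example) and the function name `reduce_PQ` is ours: it reduces $f = P(x)\, y_1 + Q(x)\, y_2$ to a combination with constant coefficients, returning the certificate $H$ with $f - [f] = \partial_x H$.

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)
y1, y2 = sp.sin(x**2*u), sp.cos(x**2*u)   # y1' = 2xu*y2, y2' = -2xu*y1

def reduce_PQ(P, Q):
    """Reduce f = P(x)*y1 + Q(x)*y2 modulo Im(d/dx): return (P0, Q0, H)
    with P0, Q0 free of x and f = P0*y1 + Q0*y2 + dH/dx."""
    P, Q = sp.Poly(P, x), sp.Poly(Q, x)
    H = sp.S.Zero
    while max(P.degree(), Q.degree()) > 0:
        d = max(P.degree(), Q.degree())
        # choose h = a*x^(d-1)*y1 + b*x^(d-1)*y2 so that dh/dx kills
        # both degree-d leading terms of f:
        a, b = Q.nth(d)/(2*u), -P.nth(d)/(2*u)
        H += a*x**(d-1)*y1 + b*x**(d-1)*y2
        # coefficients of dh/dx on y1 and y2:
        P = P - sp.Poly(a*(d-1)*x**(d-2) - 2*b*u*x**d, x)
        Q = Q - sp.Poly(b*(d-1)*x**(d-2) + 2*a*u*x**d, x)
    return P.as_expr(), Q.as_expr(), H
```

Each loop iteration lowers $\max(\deg P, \deg Q)$ by at least one, which is exactly the induction of the example; the image of the resulting reduction is spanned by $y_1$ and $y_2$, so the reduction is confined.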
Indeed, for each ,p|}>>>, itsuffices to compute the sequence ,> f|]>,> f|]>,\> until we find a relation *f+\+K>*\> f|]>=0> with |K,\,K>|\>\\\\,\,u|)>>. For each ,p|}>>, this yields anontrivial operator \\>|]>>> with f\\ \>. Let \*\|)>|]>r>>, where > and are indeterminates. We view > as a parameter that takes integer values. We may regard as a Laurent polynomial with matrix coefficients \\|)>r>>>: <\eqnarray*> ||\>T*x.>>>> If 0>, then we denote \:T\0|}>>. Setting <\eqnarray*> =\>|>|*T*A+T+\*x*T,>>>> the equation() implies <\eqnarray*> >*T*y|)>>||>*U*y,>>>> for any constant matrix \|)>r>>. The matrix can also be regarded as a Laurent polynomial with matrix coefficients \\|)>r>>. We say that is a for() if > is an invertible matrix. <\example> With > and as in Example, the identity matrix > is a head chopper. Indeed, for this choice of , we obtain <\eqnarray*> ||*x*Id=|>||>>>>*x+|>||>>>>+>|>||>>>>>*,>>>> so and =-2*Id> is invertible. The matrix *x> is also a head chopper, with <\eqnarray*> |||>||>>>>*x+|>||>>>>*x++1>|>||+1>>>>>.>>>> <\example> Consider the equation() for =1> and <\eqnarray*> |||>|>|||>||>|>>>>,>>>> for some formal parameter >. Then we claim that <\eqnarray*> ||||||>|-x>|>|>|++5|\-2>*x+|)>*\-2*\+9|\-2>*x>|-+\+3|\-2>*x>|>>>>>>>>> is a head chopper. Indeed, a straightforward computation yields <\eqnarray*> ||+\+1>|*x>|>|-2|)>*x-\-1>||)>*x+-\+2|)>*x>|>>|+7*\+10|)>*x+|)>*\+|)>*\-2*\+9|\-2>>|+-2*\+5|)>*\+2*\-7*\+6|)>*x|\-2>>|-3|)>*\+2*\-9|)>*x|\-2>>>>>>>>|||||>||>|>|||-3|)>*\+2*\-9|\-2>>>>>>*x+O,>>>> which shows that the leading coefficient of as a polynomial in is (formally) invertible. Before studying the computation of head choppers, let us first show how they can be used for the construction of so-called \Phead reductions\Q, by generalizing the inductive construction from Example. Let be a head chopper for() and assume in addition that \*\|)>r>>> and \\|)>r>>. 
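The defining property of a head chopper can be checked concretely. For our reading of the running example ($\ell = 1$, $A = \begin{pmatrix} 0 & 2ux \\ -2ux & 0 \end{pmatrix}$, an assumption rather than a quotation), taking $T = \mathrm{Id}$ gives $U(\sigma) = \sigma\,\mathrm{Id} + x A$ with $\delta(x^{\sigma} T y) = x^{\sigma - 1} U y$, and the leading $x$-coefficient of $U$ is invertible, so $T = \mathrm{Id}$ is a head chopper for this system:

```python
import sympy as sp

x, u, s = sp.symbols('x u sigma')
y = sp.Matrix([sp.sin(x**2*u), sp.cos(x**2*u)])
A = sp.Matrix([[0, 2*u*x], [-2*u*x, 0]])      # y' = A y, ell = 1

# with T = Id, U(sigma) = sigma*Id + x*A satisfies
#   d/dx (x^sigma * y) = x^(sigma-1) * U * y:
U = s*sp.eye(2) + x*A
lhs = sp.diff(x**s * y, x)
rhs = x**(s-1) * U * y
diffm = (lhs - rhs).applyfunc(lambda e: sp.simplify(sp.powsimp(sp.expand(e))))
assert diffm == sp.zeros(2, 1)

# U has x-degree 2 and its leading x-coefficient is invertible:
lead = sp.Matrix([[0, 2*u], [-2*u, 0]])
assert sp.expand(U - lead*x**2 - s*sp.eye(2)) == sp.zeros(2, 2)
assert sp.expand(sp.det(lead) - 4*u**2) == 0
```

The invertibility of the leading coefficient (here with determinant $4u^2 \neq 0$) is what allows the head reduction below to eliminate the leading term of $\zeta$ at each step.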
Given >-subvector spaces > and> of>, we say that a >-linear map:\\\> is a for() if \im \>> for all\>>. Let =deg U>. Writing with \*\|]>r>> and \|]>>, we say that \> is an if =0> or >|)>=0>. Here we understand that > stands for the evaluation of at =i> and similarly for >|)>=0>. We write > for the finite set of exceptional indices. If \>, then we notice that the matrix >\\r>> isinvertible. Any \\r>> can be regarded as a polynomial \>\*x\\r>> in . Given \>, let <\eqnarray*> >||\\r>\\e\d,e-\\\\\=0|}>.>>>> If \> and d-\\\>, then recall that the matrix >\\r>> isinvertible. We may thus define the >-linear mapping :\\\>> by <\eqnarray*> |)>>||-\*U>*x*U.>>>> We indeed have |)>\\>, since <\equation*> \*U>*x*U=\*x+O|)>. The mapping > also induces a mapping *y\\*y;\*y\\|)>*y> that we will still denote by>. Setting *U>>, the relation() yields <\equation*> -\|)>|)>*y=c*x*U*y=*T*y|)>. This shows that the mapping > is a partial reduction. If \> and d-\\\>, then we have =\> and the identity map :\*y\\*y>> is clearly a partial reduction as well. Now we observe that compositions of partial reductions are again partial reductions. For each \>, we thus have a partial reduction <\equation*> \>\\\\:\*y\\-1>*y. Now let |\|\>:\r>*y\\r>*y>> be the unique mapping with |\*y|\>=>\\\\|)>*y|)>> for all \> and \\>. Then |\|\>> is clearly a partial reduction as well and it admits a finite dimensional image |\|\>\\-1>>. For any \\r>>, we call |\*y|\>> the of*y>. The following straightforward algorithm allows us to compute head reductions: <\named-specified-algorithm|HeadReduce|)>>> \\r>> the head-reduction |\|\>\\r>> of > <|named-specified-algorithm> <\indent> >=0> for all \\\> > Let \\\> be maximal with >\0> \>*U>> \\-c*x*U> <\theorem> The routine > terminates and is correct. <\example> Let > and be as in Examples, , and. 
Taking the head chopper *x> with as in(), we get <\eqnarray*> |)>>||-\*U>*x*U>>||||>>>>->|>>>>>*|>||>>>>*x*|>||>>>>*x+|>||>>>>*x+|>||>>>>|)>>>||||>>>>-*x+*Q*x-*P*x>||*x-*P*x-*Q*x>>>>>>>>> for all =|>>>>> and \=2>. In other words, |)>*y> coincides with > from(), so the head reduction procedure coincides with the reduction procedure from Example, except that we directly reduce any *y\\2>*y> with \\=deg U=2> to itself (we have =\> and |\|\>=\*f\\*g\*f|)>*x\*g|)>*x>). The fact that we may actually reduce elements *y> with =1> somewhat further is due to the fact that the coefficient of > in() vanishes for=0>. Indeed, this means that the matrix from() actually evaluates to a polynomial in at =0>, so we may use it instead of the matrix from() as a head chopper. <\example> Let > and be as in Example, with =0>. Taking and as in() and(), weobtain =2>, =>, and <\eqnarray*> ||||||>|-x>|>|>|-+5|2>*x-+9|2>*x>|++3|2>*x>|>>>>>>>|||||>|||>|||+9|2>>>>>>*x+||>|-2>|+2>|>|+7*\+10|2>>|+5*\+6|2>>|>>>>*x++1>||>|-1>||>|+12*\+9|-2>>||>>>>.>>>> For 1>, we note that <\eqnarray*> >||3>\\3>*x\\\\3>*x.>>>> Let us show how and can be used to compute the head-chopper of >, where <\eqnarray*> >||||>>>>*x.>>>> Applying > to >, we find that is maximal with >=\\0>, so we set <\eqnarray*> |>|*U=||>>>>*||>||>|>>|||>>>>>=||>>>>.>>|>|>|-c*x*U=||>>>>*x+||>>>>*x.>>>> We repeat the loop for the new value of >, which yields <\eqnarray*> |>|>||>|*U=||>>>>*||>||>|>>|||>>>>>=||>>>>>>>|>|>|-c*x*U=||>>>>*x+||>>>>*x.>>>> Repeating the loop once more, we obtain <\eqnarray*> |>|>||>|*U=||>>>>*||>||>|>>|||>>>>>=||>>>>>>>|>|>|-c*U=>|>|>>>>*x+||>>>>.>>>> At this point, we have =1\\=2>, so we have obtained the head reduction of the original>. <\example> Let >, , , and be as in Example and \>. Taking <\equation*> \=,n=|\-3>, we observe that =>, whence any element of 3>*x> is head-reduced. Even for equations of bounded degree and order, this means that head reduced elements \\r>> can have arbitrarily large degree. 
<\remark> It is straightforward to adapt so that it also returns the certificate \\*\r>> with *y-*y|)>\\-1>*y>. Indeed, it suffices to start with \0> and accumulate \\+c*x*T> at the end of the main loop. <\remark> In the algorithm we never used the assumption that > has one row. In fact, the same algorithm works for matrices \\r>> with an arbitrary number of rows . This allows for the simultaneous head reduction of several elements in r>*y>, something that might be interesting for the application to creative telescoping. The computation of head choppers essentially boils down to linear algebra. We will rely on the concept of\Prow swept forms\Q. This notion is very similar to the more traditional row echelon forms, but there are a few differences that are illustrated in Example below. Let \s>> be a matrix and denote the -th row of by >>. Assuming that >\0>, its > is the smallest index with \0>. We say that is in if there exists a,r|}>> such that >\0,\,U>\0,U>=\=U>=0> and ,\>=0> for all i\k>. Notice that has rank in this case. An invertible matrix \r>> such that is in row swept form will be called a for . We may compute such a matrix using the routine below, which is really a variant of Gaussian elimination. Whenever we apply this routine to a matrix such that the truncated matrix > with rows >,\,U>>> is in row swept form, we notice that these first rows are left invariant by the row sweeping process. In other words, the returned row sweeper is of the form >|>|>|>>>>>>. If, in addition, the matrix has rank , then is of the form >|>|>|>>>>>>. 
<\named-specified-algorithm|RowSweeper>> a matrix \s>> a row sweeper \r>> for <|named-specified-algorithm> Id>, U> <\indent> ,j>=0> for all \i> and Let \i> be minimal such that ,j>\0> for some Swap the -th and >-th rows of and R>> > <\indent> ,\>\S,\>-v*R,\>*S>>, ,\>\R,\>-v*R,\>*R>> <\example> Given the matrix <\eqnarray*> ||||||>|||||>|||||>|||||>|||||>>>>,>>>> the algorithm produces the row sweeper <\eqnarray*> ||||0>|0>|>|||||>|2>||||>||1>||1>|>|2>|1>|||>>>>.>>>> Since the two first rows of were already in row swept form, this matrix is indeed of the form >|>|>|>>>>>>. The resulting row swept form for is <\eqnarray*> ||||||>|||||>|||||>|||||>|||||>>>>.>>>> The more traditional row echelon and reduced row echelon forms insist on moving the rows>> for which > is minimal to the top, so the first two rows are not left invariant. The different \Pnormal\Q forms that we obtain for our example matrix are shown below: <\equation*> Original matrix >>>|Row swept form>>>|Row echelon form>>>|>>>|||||>|||||>|||||>|||||>|||||>>>>>|||||>|||||>|||||>|||||>|||||>>>>>|||||>|||||>|||||>|||||>|||||>>>>>|||1>|0>|0>|1>|0>>|||||>|||||>|||||>|||||>>>>>>>>> In Example we already pointed out that head choppers are generally not unique. Let us now study some transformations that allow us to produce new head choppers from known ones; this will provide us with useful insights for the general construction of head choppers. For any \\>, we define the operator >> on *\|)>|]>r>> by <\eqnarray*> > T|)>|)>>||>*T+\|)>.>>>> <\proposition> For all \\>, we have <\eqnarray*> > T|)>>||> \.>>>> <\proof> Setting >, =\> T> and =\|)>>, we have <\eqnarray*> |)>>||*x>*T+\|)>*A+x>*T+\|)>+\*x-1>*T+\|)>+\*x-1>*T+\|)>>>|||>**T+\|)>*A+T+\|)>++\|)>*x*T+\|)>|)>>>|||>*U+\|)>.>>>> In other words, =\> U>. <\proposition> Assume that \\> and that \|)>r>> is invertible. Then <\enumerate-alpha> is a head chopper for )> if and only if >*T> is a head chopper for )>. 
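The RowSweeper routine above can be transcribed into a few lines of exact rational arithmetic. The sketch below is our transcription (pure Python over $\mathbb{Q}$ via `fractions.Fraction`; the function name is ours): it swaps the first remaining nonzero row into place and clears its pivot column below, so that rows which are already swept at the top are left invariant.

```python
from fractions import Fraction

def row_sweeper(U):
    """Return an invertible S with S*U in row swept form: the nonzero
    rows come first, each pivot column is cleared below its row, and
    zero rows end up at the bottom."""
    r, s = len(U), len(U[0])
    S = [[Fraction(int(i == j)) for j in range(r)] for i in range(r)]
    W = [[Fraction(a) for a in row] for row in U]
    i = 0
    while i < r:
        # first remaining nonzero row, if any:
        i2 = next((k for k in range(i, r) if any(W[k])), None)
        if i2 is None:
            break
        W[i], W[i2] = W[i2], W[i]
        S[i], S[i2] = S[i2], S[i]
        j = next(c for c in range(s) if W[i][c])   # pivot column
        for k in range(i + 1, r):                  # clear below the pivot
            v = W[k][j] / W[i][j]
            W[k] = [a - v*b for a, b in zip(W[k], W[i])]
            S[k] = [a - v*b for a, b in zip(S[k], S[i])]
        i += 1
    return S
```

When the first $k$ rows of the input are already in row swept form, no swap touches them, so the returned $S$ indeed has the block shape with an identity block in the upper left corner, as used in the propositions below.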
is a head chopper for )> if and only if is a head chopper for )>. <\proof> Assume that is a head chopper for(). Setting =\> T> and =\|)>>, we have =\> U> and >|)>=U+\|)>> is invertible. Similarly, setting =P*T> and =\|)>>, we have =P*U>, whence >=P*U> is invertible. The opposite directions follow by taking > and > in the roles of > and . Given a head chopper \*\|)>|]>r>> for() and >, let \> be minimal such that \0> or \0>. Then is also maximal with the property that \\\\*\|)>r>> and =\|)>=\\\|)>r>>. From Proposition() it follows that > is again a head chopper for(). Without loss of generality, this allows us to make the additional assumption that \*\|)>r>> at the beginning of subsection. In order to compute head choppers by induction, it will be convenient to introduce a partial variant of this concept. First of all, we notice that the equations(\U) and Proposition generalize to the case when \*\|)>|]>r>>, where is arbitrary. Notice also that deg T+\>, where \max,-1|)>>. Given \> and\>,let <\eqnarray*> >||\*\|)>|]>r>\deg T\d|}>>>|>||M\deg \\d+\-e|}>.>>>> It is easy to see that both > and > are |)>|]>>-modules. Now consider a matrix \*\|)>|]>r>> with rows >,\,T>\M> ordered by increasing degree <\equation*> deg T>\\\deg T>. Let >, let > be the matrix with rows >>*T>,\,\>>*T>>, and let be maximal such that >\d>. We say that is a>head annihilator> for() if the following conditions are satisfied: <\description> The rows of form a basis for the |)>|]>>-module >; The matrix > is invertible; The first rows of -e>> are |)>>-linearly independent. The matrix *x>*Id> is obviously a >-head annihilator with . If , then we notice that implies that is a head chopper for(). We also have the following variant of Proposition(): <\proposition> For any \\>, we have <\eqnarray*> >>||>*M>>|,e>>||>*M.>>>> Moreover, is a >-head annihilator if and only if > T> is a ,e|)>>-head annihilator. 
Using a constant linear transforation as in Proposition(), we may now achieve the following: <\proposition> Let be a >head annihilator for)>. Let > and be as in >\U> and denote >=rank-e>|)>>. Then there exists an invertible matrix \|)>r>> of the form <\equation*> J=>|>|>|>>>>> such that the last >> rows of -e>> vanish and such that is a >-head annihilator for)>. <\proof> Let <\equation*> J=>|>||>>>> be the row sweeper for -e>> as computed by the algorithm from section. By construction, >=deg T>> for all k>, and the last >> rows of -e>> vanish. We claim that >=deg T>=d> for all k>. Indeed, if >\d>, then this would imply that |)>>=0>, which contradicts . From our claim, it follows that >\\\deg >> and is maximal with the property that >\d>. Since the first rows of and > coincide, the first rows of -e>> are |)>>-linearly independent. This shows that is satisfied for. As to , let \\|)>r>> be the invertible matrix with <\equation*> |)>=J-d|)>=>|>|-d|)>>|-d|)>>>>>>. Then we notice that =*\>, whence =*N> is invertible. The rows of clearly form a basis for >, since is invertible. As long as > is not invertible, we finally use the following simple but non-constant linear transformation in order to improve the rank of >: <\proposition> Let be a >head annihilator for)>. Let >, let >=rank-e>|)>>, and assume that the last >> rows of -e>> vanish. Let >> be the matrix with rows <\equation*> T|)>>,\, T|)>>,\>,T>+1,\>,\,T>. Then >> is a >head annihilator for)>. <\proof> We have >>=deg T>-1\d> for all k>> and >>=deg T>=d> for all k>>. In particular, we have >>\\\deg T>>> and >> is maximal with the property that >>,\>\d>. Setting >=\>|)>>, we also observe that >>=\>|)>> for all k>>. Since -e>|)>=k>> and the last >> rows of -e>> vanish, the first >> rows of both -e>> and >-e-1>> are |)>>-linearly independent. In other words, is satisfied for>>. As to , we observe that >|)>=\>, whence >|)>=N> is invertible. Let us finally show that >> forms a basis for the |)>|]>>-module >. So let M>. 
Then M>, so > for some row matrix =\+\*\+\\\|)>|]>r>>. Setting |)>>, we have d+\-e-1>, whence -e>=*U-e>=0>>. Since the first >> rows of -e>> are |)>>-linearly independent and the last >> rows of -e>> vanish, we get |)>=0> for all k>>. Let |~>> be the row vector with |~>=\*\> for k>> and |~>=\> for k>>. By what precedes, we have |~>\\|)>|]>r>> and >|)>+\+\>|)>>. Now we have >|)>=\>>|)>|)>=|~>>>|)>> for k>> and >|)>=|~>>>|)>> for k>>. In other words, |~>>|)>>, as desired. Propositions and allow us to compute >-head annihilators for() with arbitrarily large. Assuming that we have in for sufficiently large , this yields the following algorithm for the computation of a head chopper for(): <\named-specified-algorithm|HeadChopper,A|)>>> \\> and \r>> a head chopper \*\|)>|]>r>> for () <|named-specified-algorithm> \*Id,U\\> <\indent> > is invertible |)>> \> >\rank|)>>, \>>*\>|>||>>>>>>>> \*T,\*U|)>> <\example> Before we prove the correctness of , let us show how it works for> and as in Example. We enter the loop with <\equation*> T=||>|||>|||>>>>,U=|x>+x|\|0>||x>+2|1>|||x>+1>>>>|)>, so that is a >-head annihilator for)>. During the first iteration of the loop, we set <\equation*> U=||>|||>|||>>>>,J\||>|||>|||>>>>,T\||>|||>|||>>>>,U\-1|x>+x|\|0>||x>||x>-\+2|1>|||x>+1>>>>|)> and then <\equation*> k>\1,\\>||>|||>|||>>>>,T\>||>|||>|||>>>>,U\-1|x>+1||x>|0>||x>||x>-\+2|1>|||x>+1>>>>|)>. Propositions and imply that the new matrix is a >head annihilator for)>. The second iteration of the main loop yields <\equation*> U=||>||>|>||>|>>>>,J\||>|||>|||>>>>, <\equation*> T\>>||>|->>|>|>|>||>>>>,U\-2|x>||x>|0>||x>+|x>|2-\+-\-1|x>|1>||x>+|x>|-\|x>||x>>>>>|)>, after which we set >\2>, <\equation*> \\>||>||>|>|||>>>>,T\>>||>|->>|>|>|>||>>>>,U\+-2|x>||x>|0>||x>+|x>||x>+-\-1|x>|>||x>+|x>|-\|x>||x>>>>>|)>. At this point, is a >-head annihilator for)>. 
The third iteration of the main loop yields <\equation*> U=||>||>|>|>|-\-\>|>>>>>,J\||>|||>|>|+\|2-\>>|>>>>, <\equation*> T\>|0|0>|>-||0>|+2|-2|)>*x>++2|-2|)>*x>|-1-+\|-2|)>*x>|1>>>>|)>,U\||1>|-3|)>*\-\|\-2>>>>>|)>*+O>|)> and then >=3>, \\*Id>, <\equation*> T\>|0|0>|>->|>|0>|++1|-2|)>*x>++1|-2|)>*x>|-+\-1|-2|)>*x>|>>>>|)>,U\||1>|-3|)>*\-2*\+3|\-2>>>>>|)>*>+O>|)>. Applying > to both and , we find the head chopper from Example. <\proposition> Let >. Consider the value of at the beginning of the loop and after iterations. Then is a >-head annihilator. <\proof> We first observe that > throughout the algorithm. Let us now prove the proposition by induction over . The proposition clearly holds for . Assuming that the proposition holds for a given , let us show that it again holds at the next iteration. Consider the values of and at the beginning of the loop and after iterations. Let be maximal such that >\d>>. From the induction hypothesis, it follows that the first rows of > are |)>>linearly independent, whence the matrix is of the form <\equation*> J=>|>|>|>>>>>. Now Proposition implies that is still a >-head annihilator. Since the last >> rows of >> vanish, Proposition also implies that > is a>-head annihilator. This completes the induction. Notice also that >\k> is maximal with the property that |)>>,\>\d>. <\proposition> Assume (for contradiction in Theorem below) that the algorithm > does not terminate for some given input ,A|)>>. Then there exists a non-zero row matrix \*\|)>|]>|]>r>> with =0>. In particular, =0>>. <\proof> Assume that > does not terminate. Let > be the value of at the beginning of the main loop after iterations. Also let > and > be the values of and > as computed during the >-th iteration. Let > be maximal such that ,\>\d\deg \>. Using the observation made at the end of the above proof, we have \k\\>, so there exist an index \\> and >\r> with =k>> for all e>. 
Furthermore, <\equation*> J=>>|>|>|>>>>>,\=>*\>|>||>>>>>>, and <\eqnarray*> >||*T|)>.>>>> Moreover, for e>, the row sweeper > is even of the form <\eqnarray*> >||>>>|>|>|>>>>>>>.>>>> By induction on \>, we observe that \\*\|)>|]>r>>. For e>, we also have *T|)>>\e-e> for all k>>, again by induction. Consequently, *T-\*T|)>>\e-e> for all e>, which means that the sequence *T> formally converges to a limit *T>> in |)>|]>|]>r>>. By construction, the first >> rows of >> are zero, its last >> rows have rank >>, and >|)>=0>. We conclude by taking to be the last row of >>. <\theorem> The algorithm > terminates and returns a head chopper for)>. <\proof> We already observed that > throughout the algorithm. If the algorithm terminates, then it follows that is indeed a head chopper for(). Assume for contradiction that the algorithm does not terminate and let \*\|)>|]>|]>r>> be such that =0>. Let \r>> be afundamental system of solutions to the equation)>, where > is some differential field extension of |)>|)>|)>> with constant field |)>>. From =0> we deduce that =0>, whence \|)>>. More generally, R|)>=0> whence R|)>*y|)>=0> and R|)>*y\\|)>> for all \>. Since the space |)>> has dimension over |)>>, it follows that there exists apolynomial \\|)>|]>> of degree at most in > such that *y=0> and \0>. Since is a fundamental system of solutions, we have 0>. This contradicts the existence of an element \\\> with*y=0>. <\remark> In 6>, we proved a polynomial degree bound for the computed head chopper. Sharper bounds have been proven in and the complexity of reduction-based creative telescoping has been further investigated in. Note that one should not confuse the existence of polynomial degree bounds for head choppers with the absence of such bounds for exceptional indices. Indeed, Example shows how to obtain arbitrarily high exceptional indices for equations of bounded degree and order. Yet, the degrees of the corresponding head choppers are also bounded, as shown in Example. 
<\remark> As stated in the introduction, the construction of head choppers bears some similarities with Abramov's EG-eliminations. Let be an indeterminate and let n+1> be the shift operator. Then EG-eliminations can be used to compute normal forms for linear difference operators in r>>. The rank of the leading (or trailing) coefficient of the normal form is equal to the rank of the original operator. Abramov achieves such normal form computations by transforming the problem into a big linear algebra problem over >. Our algorithm for the computation of head choppers is different in two ways: the operator > is not a shift operator and we work directly over |)>|]>>. Head reduction essentially allows us to reduce the valuation in > of elements in > through the subtraction of elements in \>. Tail reduction aims at reducing the valuation in > in a similar way for any > in the algebraic closure |^>> of >. It is well known that holonomy is preserved under composition with rational functions. Modulo suitable changes of variables, this allows us to compute tail reductions using the same algorithms as in the case of head reduction. For completeness, we will detail in this section how this works. More precisely, let \\>, \|^>> and \*|^>|)>|)>|]>r>>. We may regard as a Laurent polynomial in > with matrix coefficients \|^>|)>r>>: <\eqnarray*> ||\>T*|)>.>>>> If 0>, then we denote its valuation in > by > T=min\:T\0|}>>. Setting <\eqnarray*> =\>>|>|*T*A+T+\*|)>*T,>>>> the equation() implies <\eqnarray*> |)>>*T*y|)>>|||)>>*U*y,>>>> for any matrix |^>|)>r>>. The matrix can also be regarded as a Laurent polynomial with matrix coefficients \|^>|)>r>>. We say that is a >> for() if > U>> is an invertible matrix. In fact, it suffices to consider tail choppers at the origin: <\lemma> Let \*|^>|)>|)>|]>r>>, where \|^>>. Define |)>=T,\|)>>, |~>=\|)>> and =A|)>>. Then is a tail chopper at > for)> if and only if > is a tail chopper at for |~>*=*>. <\proof> Setting =\|)>>, we have =U|)>>.
Consequently, > =val U> and > >=U U>>. There is also a direct link between head choppers and tail choppers via the change of variables x>. <\lemma> Let \*|^>|)>|]>r>>. Setting =x>, we define |~>|)>=-x*\>, |)>=A> and ,\|)>=T|)>>. Then is a tail chopper at for)> if and only if > is a head chopper for |~>*=*>. <\proof> Setting =\|)>>, we have <\eqnarray*> ,-\|)>>|||~>|)>*,-\|)>*|)>+ |\ >,-\|)>-\**,-\|)>>>|||*\*T|)>*A-x*T|)>-\*x*T|)>>>|||**T|)>*A+T|)>+\*x*T|)>|)>>>|||*U|)>.>>>> Consequently, =val U+2> and >|)>=U U>|)>>. Finally, the matrix *Id> is a tail chopper at almost all points >: <\lemma> Let \|^>> be such that |~>|)>\0>. Then *Id> is a tail chopper for)> at >. <\proof> If |~>|)>\0> and *Id>, then() becomes *|)>*\*Id+O|)>|)>> for \>. In particular, > U=-1> and >>=\*\|)>*Id> is invertible in |^>|)>r>>. Now consider a monic square-free polynomial \\> and assume that we wish to compute a tail chopper for() at a root > of > in |^>>. First of all, we have to decide how to conduct computations in |^>>. If > is irreducible, then we may simply work in the field =\/|)>> instead of |^>> and take > to be the residue class of , so that > becomes a generic formal root of >. In general, factoring > over > may be hard, so we cannot assume > to be irreducible. Instead, we rely on the well-known technique of dynamic evaluation. For the convenience of the reader, let us recall that dynamic evaluation amounts to performing all computations as if > were irreducible and =\/|)>> were a field with an algorithm for division. Whenever we wish to divide by a non-zero element > (with \>) that is not invertible, |)>> provides us with a non-trivial factor of >. In that case, we raise an exception and redo all computations with |)>> or /gcd|)>> in the role of >. So let \\> be a formal root of > and define =|)>>, |)>=y>, |~>|)>=-|)>*\> and |)>=A>. Let ,\|)>> be a head chopper for the equation |~>*=*>, as computed using the algorithm from section. Then |)>>=,-\|)>>> is a tail chopper at > by Lemmas and.
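The dynamic evaluation strategy just described can be illustrated by a small sketch (our own illustration, not the paper's implementation): we invert elements of \[x]/(\(x)) as if \ were irreducible and, whenever a zero divisor shows up, split \ via a gcd computation and redo the work modulo each factor. The function name `inverses_mod` and the use of sympy are our choices.

```python
from sympy import Poly, gcd, invert, degree, symbols

x = symbols('x')

def inverses_mod(sigma, rho):
    """Dynamic evaluation of 1/sigma(alpha) for a formal root alpha of rho.

    Pretend rho is irreducible and try to invert sigma in Q[x]/(rho).
    If sigma is a zero divisor, gcd(sigma, rho) is a proper factor of rho;
    split rho and redo the computation modulo each factor.  Returns a list
    of (factor, inverse or None) pairs, where None marks the branches on
    which sigma(alpha) = 0.
    """
    g = gcd(sigma, rho)
    if degree(g, x) == 0:
        # sigma is invertible modulo rho: extended Euclidean algorithm
        return [(rho, invert(sigma, rho, x))]
    if degree(g, x) == degree(rho, x):
        # sigma vanishes at every root of rho: genuine division by zero
        return [(rho, None)]
    # non-trivial splitting rho = g * (rho / g); redo on both branches
    cofactor = Poly(rho, x).quo(Poly(g, x)).as_expr()
    return inverses_mod(sigma, g) + inverses_mod(sigma, cofactor)

# rho = (x^2 - 2)(x^2 - 3) is square-free but not irreducible over Q
rho = (x**2 - 2)*(x**2 - 3)
branches = inverses_mod(x**2 - 2, rho)
```

Inverting x² − 2 forces the split: it is zero on the branch x² − 2 and congruent to 1 on the branch x² − 3.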
Let be a tail chopper for() at \|^>>. Let =|)>>, |)>=y>, |~>|)>=-|)>*\> and |)>=A> be as above, so that ,\|)>=T|)>> is a head chopper for the equation |~>*=*>. In particular, rewriting linear combinations *y> with \|^>|)>|]>r>> as linear combinations |~>*> with |~>\|^>|]>r>>, we may head reduce |~>*> as described in section. Let |~>\|^>|]>r>> be such that |~>*=||~>*|\>>. Then we may rewrite |~>*> as an element *y> of |^>|)>|]>r>*y>. We call *y> the tail reduction of *y> at > and write *y=|\*y|\>>>. Let |~>> be the finite set of exceptional indices for the above head reduction and =-|~>>. Setting >> and =val> U>, it can be checked that the following algorithm computes the tail reduction at >: <\named-specified-algorithm|TailReduce,\|)>>> \|^>|)>|]>r>> the tail reduction |\|\>>\|^>|)>|]>r>> of > at > <|named-specified-algorithm> <\indent> >=0> for all |)>\\> > Let |)>\\> be minimal with >\0> \>*U>> \\-c*|)>*U> Let us now study how head and tail reductions can be glued together into a global confined reduction on =\|]>r>*y>. More generally, we consider the case when =\|]>r>*y>, where \\> is a monic square-free polynomial such that > divides > for some \>. We assume that we have computed a head chopper for() and tail choppers >> for() at each of the roots ,\,\>> of > in |^>>. In particular, we may compute the corresponding head and tail reductions <\eqnarray*> |\|\>:|^>r>*y>|>||^>r>*y>>||\|\>>:|^>|)>|]>r>*y>|>||^>|)>|]>r>*y,,\|)>.>>>> Given an element > of the Galois group of |^>> over >, we may also assume without loss of generality that the tail choppers were chosen such that |)>>=\>|)>> for all (note that this is automatically the case when using the technique of dynamic evaluation from section). Partial fraction decomposition yields |^>>-linear mappings <\equation*> \>:|^>|]>r>\|)>*|^>|)>|]>r> and <\equation*> \>:|^>|]>r>\|^>r> with <\eqnarray*> >||>|)>+\>|)>+\+\>>|)>,>>>> for all \|^>|]>r>>.
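For scalar rational functions, the decomposition into a polynomial part plus one proper part per pole can be realized directly with partial fractions; the following sketch (our own illustration of the scalar case, using sympy's `apart`) groups the terms of the decomposition by pole.

```python
from sympy import symbols, apart, simplify

x = symbols('x')

def split_at_poles(phi, poles):
    """Split a rational function phi into a polynomial part phi_inf and
    one proper part per pole, mirroring the decomposition of phi into a
    part at infinity plus one part per root (scalar case only)."""
    parts = apart(phi, x).as_ordered_terms()   # full partial fractions
    phi_inf = sum(t for t in parts if t.is_polynomial(x))
    at_pole = {}
    for a in poles:
        # a proper term belongs to the pole a if it blows up at x = a
        at_pole[a] = sum(t for t in parts
                         if not t.is_polynomial(x)
                         and simplify(1/t.subs(x, a)) == 0)
    return phi_inf, at_pole

# toy example with simple poles at x = 1 and x = -2
phi = (x**3 + 1)/((x - 1)*(x + 2))
phi_inf, at_pole = split_at_poles(phi, [1, -2])
```

Here phi splits as (x − 1) + 2/(3(x − 1)) + 7/(3(x + 2)), and the three pieces add back up to phi.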
This allows us to define a global reduction *y|]>> of *y> by <\eqnarray*> *y|]>>|||\>|)>*y|\>+|\>|)>*y|\>>+\+|\>>|)>*y|\>>>.>>>> <\theorem> The formula )> defines a confined reduction on >. <\proof> Let > be an automorphism of |^>> over >. Then > naturally extends to |^>|]>r>*y> by setting *y|)>=\|)>*y> for all \|^>|]>r>>. Given \\|]>r>> and ,\|}>>, we have |)>>|)>=\>|)>|)>>. By our assumption that |)>>=\>|)>>, it follows that <\eqnarray*> |\|)>>|)>*y|\>|)>>>|||\>|)>*y|\>>|)>.>>>> Summing over all , we get *y|]>=\*y|]>|)>>. Since this equality holds for all automorphisms>, we conclude that *y|]>\\|]>r>*y>. Similarly, given \|^>|]>r>*y> with *y|]>=\ \>, we have |)>=\*y|]>|)>=y-*y|]>=\> for all automorphisms >, whence \\>. This shows that)> defines a reduction on >. For any *y> in the image of the restriction of |]>> to >, we finally observe that > \,\,val>> \>, and > are uniformly bounded, by construction. In other words, the reduction |]>> is confined. For actual implementations, one may perform the computations in extension fields =\/|)>>>, where > is an irreducible factor of > (or simply a square-free factor, while relying on dynamic evaluation as in section). Let ,\,\> be the roots of such an irreducible factor > and assume that we wish to compute |\>|)>*y|\>>+\+|\>|)>*y|\>>> for \\|]>r>>>. Instead of computing each |\>|)>*y|\>>> separately, one may use the formula <\eqnarray*> |\>|)>*y|\>>+\+|\>|)>*y|\>>>||/\>|\>>|)>*y|\>>>|)>,>>>> where >\x mod \> is the canonical root of > in > and /\>*y|)>=Tr/\>|)>*y> for all \|]>r>>>>. <\example> Let > be a fixed parameter. Consider the function <\eqnarray*> ||+u|)>>,>>>> which satisfies the differential equation <\eqnarray*> >|||x+u>*y.>>>> This equation admits two singularities at \>, where +u=0>. Any non-zero element of |^>|)>|)>|]>> is a tail chopper at >. Taking > as our tail chopper, we have <\eqnarray*> |)>*|)>>*y|)>>|||)>-1>*+\|)>*\++1+\|)>*|)>|)>*y>>>> for all |^>|)>|]>> and \\>. 
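Assuming the example function is y = (x² + u)^(−λ), so that its logarithmic derivative is y′/y = −2λx/(x² + u), the differential equation can be verified mechanically; the following sympy computation is our own sanity check of this reading.

```python
from sympy import symbols, diff, simplify

x, u, lam = symbols('x u lambda')

# the example function, under our reading: y = (x^2 + u)^(-lambda)
y = (x**2 + u)**(-lam)

# its derivative should satisfy y' = -2*lambda*x/(x^2 + u) * y
residue = simplify(diff(y, x) + 2*lam*x/(x**2 + u)*y)
```

The residue vanishes identically, confirming the equation for a generic exponent λ.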
Given <\eqnarray*> >||||)>>+||)>>+\+\,>>>> its tail reduction at > is therefore recursively defined by <\eqnarray*> |\|\>>>||>|s=0>>||\-||)>>-+2-s|2*+1-s|)>*\>*||)>>|\>>>|s\0.>>>>>>>>> Now assume that we wish to compute the tail reduction <\equation*> |\>|)>*y|\>>+|\>|)>*y|\>> of <\equation*> \=+u|)>>=>*|)>>->*>+>*|)>>+>*> with respect to both roots > and > of +u>. We have <\eqnarray*> |\>|)>*y|\>>>|||>*|)>>->*>|\>>>>||||->*+2-2|2*+1-2|)>*\>*>->*>|\>>=||4*\*-1|)>>*>|\>>>>||||-1|4*\*-1|)>>*+2-1|2*+1-1|)>*\>|\>>=|-1|8*\*-\|)>>|\>>>>|||-1|8*\*-\|)>>=-1|8*u*-\|)>>.>>>> The above computation holds when considering > as a root of the polynomial +u> in the algebraic closure of >. Exactly the same computation can therefore be used for the other root > of this polynomial. The computation also holds for a generic root >> in the algebraic extension =/+u|)>>> and we obtain <\eqnarray*> |\>|)>*y|\>>+|\>|)>*y|\>>>||/\>-1|8*u*-\|)>>|)>=-1|4*u*-\|)>>.>>>> Given the confined reduction |]>:\\\> from section, let us now give a general procedure for turning it into a normal confined reduction |\|\>:\\\>. For this purpose, we assume that we know a >-subvector space > of > with the property that for any \> with \|]>>, we have \>. <\remark> It can be shown that there exist integers >,\,c>>>, and >> such that we can take <\eqnarray*> >||*y\val> \>|)>\c>,\,val>> \>>|)>\c>>,deg \>|)>\c>|}>.>>>> For equations() of bounded degree and size, Example shows that >> can become arbitrarily large, whence so can the dimension of >. For this reason, normal confined reductions can be computationally expensive, so it is usually preferable to rely on non-normalized reductions. One way to compute >,\,c>>>, and >> was detailed in 6 and 7>, but better approaches have been proposed since. Now let \ \\|]>> and let be a supplement of in |]>> so that |]>=V\W>. We may compute bases of and using straightforward linear algebra. The canonical >-linear projections :|]>\V> and :|]>\W>> with +\=Id> are also computable.
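The "straightforward linear algebra" behind the supplement W and the projections can be sketched as follows (our own toy illustration over \^dim, not the paper's implementation): extend a basis of V by standard basis vectors to obtain a supplement, and read the projections off a change-of-basis matrix.

```python
from sympy import Matrix, eye, diag

def supplement_and_projections(V_basis, dim):
    """Extend a basis of a subspace V of K^dim (given as column vectors)
    to a basis of K^dim by standard basis vectors, thereby choosing a
    supplement W, and return the canonical projections (pi_V, pi_W)
    with pi_V + pi_W = Id."""
    basis = list(V_basis)
    for i in range(dim):
        e_i = eye(dim).col(i)
        if Matrix.hstack(*(basis + [e_i])).rank() > len(basis):
            basis.append(e_i)              # independent: goes into W
    B = Matrix.hstack(*basis)              # columns: V-basis, then W-basis
    k = len(V_basis)
    D = diag(*([1] * k + [0] * (dim - k)))
    pi_V = B * D * B.inv()                 # keep the V-coordinates only
    return pi_V, eye(dim) - pi_V

# toy example: V = span{(1, 1, 0)} inside Q^3
v = Matrix([1, 1, 0])
pi_V, pi_W = supplement_and_projections([v], 3)
```

The two matrices are complementary idempotents: pi_V fixes V pointwise, pi_W kills it, and pi_V + pi_W is the identity.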
We claim that we may take |f|\>\\|)>> for every \>. <\proposition> The mapping |\|\>:\\\;f\\|)>> defines a computable normal confined reduction on >. <\proof> The mapping |\|\>> is clearly a computable confined reduction on >. It remains to be shown that |f|\>=0> for all \>. Now |]>-f\\ \>, so |]>\\ \> and there exists a \> with =|]>>. Since \|]>>, it follows that \> and \\ \\|]>=V>. In other words, |]>=g\V> and |f|\>=\|]>|)>=0>. <\acknowledgments*> We would like to thank Pierre Lairez for a helpful remark. We also thank the second referee for pointing us to and for further helpful remarks and suggestions. <\bibliography|bib|tm-plain|all> <\bib-list|23> S. A. Abramov. EG-eliminations. , 5(4\U5):393\U433, 1999. A. Bostan, S. Chen, F. Chyzak and Z. Li. Complexity of creative telescoping for bivariate rational functions. , 203\U210. New York, NY, USA, 2010. ACM. A. Bostan, S. Chen, F. Chyzak, Z. Li and G. Xin. Hermite reduction and creative telescoping for hyperexponential functions. , 77\U84. ACM, 2013. A. Bostan, F. Chyzak, P. Lairez and B. Salvy. Generalized Hermite reduction, creative telescoping and definite integration of differentially finite functions. , 95\U102. New York, 2018. ACM. A. Bostan, L. Dumont and B. Salvy. Efficient algorithms for mixed creative telescoping. , 127\U134. ACM, 2016. A. Bostan, P. Lairez and B. Salvy. Creative telescoping for rational functions using the Griffiths-Dwork method. , 93\U100. ACM, 2013. S. Chen. . , École Polytechnique, 2011. S. Chen, M. van Hoeij, M. Kauers and C. Koutschan. Reduction-based creative telescoping for Fuchsian D-finite functions. , 85:108\U127, 2018. S. Chen, H. Huang, M. Kauers and Z. Li. A modified Abramov-Petkovšek reduction and creative telescoping for hypergeometric terms. , 117\U124. ACM, 2015. S. Chen and M. Kauers. Some open problems related to creative telescoping. , 30(1):154\U172, 2017. S. Chen, M. Kauers and C. Koutschan. Reduction-based creative telescoping for algebraic functions. , 175\U182. New York, NY, USA, 2016. ACM. F. Chyzak. .
Habilitation, École polytechnique, 2014. J. Della Dora, C. Dicrescenzo and D. Duval. A new method for computing in algebraic number fields. G. Goos and J. Hartmanis, , 174, 321\U326. Springer, 1985. L. Dumont. . , École Polytechnique, 2016. K. Geddes, H. Le and Z. Li. Differential rational normal forms and a reduction algorithm for hyperexponential functions. , 183\U190. ACM, 2004. C. Hermite. Sur l'intégration des fractions rationnelles. , 1:215\U218, 1872. J. van der Hoeven. Relax, but don't be too lazy. , 34:479\U542, 2002. J. van der Hoeven. Constructing reductions for creative telescoping. , HAL, 2017. . J. van der Hoeven. Creative telescoping using reductions. , HAL, 2018. . H. Huang. New bounds for hypergeometric creative telescoping. , 279\U286. ACM, 2016. P. Monsky. Finiteness of de Rham cohomology. , 94(1):237\U245, 1972. M. Ostrogradsky. De l'intégration des fractions rationnelles. , IV:147\U168, 1845. D. Zeilberger. The method of creative telescoping. , 11(3):195\U204, 1991. <\initial> <\collection> <\attachments> <\collection> <\associate|bib-bibliography> <\db-entry|+9izXaIC09Kv5vG|article|Zeil91> <|db-entry> > <\db-entry|+9izXaIC09Kv5s1|phdthesis|Chyz:hab> <|db-entry> > <\db-entry|+9izXaIC09Kv5rx|phdthesis|Chen:phd> <|db-entry> > <\db-entry|+9izXaIC09Kv5sM|phdthesis|Dum:phd> <|db-entry> > <\db-entry|+Ct2UJzyF1FJmm7|article|Ost1845> <|db-entry> > <\db-entry|+Ct2UJzyF1FJmlr|article|Her1872> <|db-entry> > <\db-entry|+1pqhNV562VulElCW|inproceedings|BCCL10> <|db-entry> S. F. Z. > <\db-entry|+Ct2UJzyF1FJmlU|inproceedings|BCCLX13> <|db-entry> S. F. Z. G. > <\db-entry|+Ct2UJzyF1FJmlo|inproceedings|GLL04> <|db-entry> H. Z. > <\db-entry|+Ct2UJzyF1FJmlc|inproceedings|CHKL15> <|db-entry> H. M. Z. > >ek reduction and creative telescoping for hypergeometric terms> <\db-entry|+Ct2UJzyF1FJmlt|inproceedings|Huang16> <|db-entry> > <\db-entry|+Ct2UJzyF1FJmlb|inproceedings|BLS16> <|db-entry> L. B. > <\db-entry|+9izXaIC09Kv5s3|inproceedings|CKK16> <|db-entry> M. C.
> <\db-entry|+Ct2UJzyF1FJmla|inproceedings|BLS13> <|db-entry> P. B. > <\db-entry|+9NrtYe3notwUP3|article|CHKK18> <|db-entry> M. van M. C. > <\db-entry|+9NrtYe3notwUP4|article|CK17> <|db-entry> M. > <\db-entry|+bB8EKybvsgtMeJ|article|Mon72> <|db-entry> > <\db-entry|+Qvfc7gcMyadUaD|article|Abr99> <|db-entry> > <\db-entry|+9NrtYe3notwUPU|techreport|vdH:telescope-pre> <|db-entry> > > <\db-entry|+9NrtYe3notwUP2|inproceedings|BCLS18> <|db-entry> F. P. B. > <\db-entry|+Ct2UJzyF1FJmmR|techreport|vdH:redtel> <|db-entry> > > <\db-entry|+8lDHqURSvmeX87|article|vdH:relax> <|db-entry> > <\db-entry|+8lDHqURSvmeWxE|inproceedings|DDD85> <|db-entry> C. D. > J. > <\auxiliary> <\collection> <\associate|bib> Zeil91 Chyz:hab Chen:phd Dum:phd Ost1845 Her1872 BCCL10 Chen:phd Dum:phd BCCLX13 GLL04 CHKL15 Huang16 BLS16 CKK16 BLS13 CHKK18 CK17 Mon72 CK17 Dum:phd Abr99 CKK16 BCCLX13 BLS16 CHKK18 vdH:telescope-pre BCLS18 vdH:redtel vdH:telescope-pre BCLS18 vdH:redtel Chyz:hab Dum:phd vdH:telescope-pre BCLS18 vdH:redtel vdH:redtel Abr99 DDD85 vdH:telescope-pre BCLS18 vdH:redtel Mon72 <\associate|toc> 1. Introduction 2. Creative telescoping 2.1. Holonomic functions 2.2. Creative telescoping 2.3. Reduction-based creative telescoping 3. Head reduction 3.1. Head choppers 3.2. Head reduction 4. Row swept forms 5. Computing head choppers 5.1. Transforming head choppers 5.2. Head annihilators 5.3. Computing head choppers 5.4. Correctness and termination 6. Tail reduction 6.1. Tail choppers 6.2. Computing tail choppers 6.3. Tail reduction 7. Global reduction 7.1. Gluing together the head and tail reductions 7.2. Machine computations 7.3. Normalizing the reduction Bibliography