> <\body> <\hide-preamble> >> >> >> >> >> >> >> \; |||<\author-address> de Mathématiques ( 425) CNRS, Université Paris-Sud 91405 Orsay Cedex France Email: >|<\doc-note> This work was partially supported by the ANR project. |>||> <\abstract> Let [[z]]> be the ring of power series over an effective ring >. In , it was first shown that differential equations over [[z]]> may be solved in an asymptotically efficient way using Newton's method. More precisely, if (n)> denotes the complexity in order two polynomials of degree > n> over >, then the first coefficients of the solution can be computed in time (n))>. However, this complexity does not take into account the dependency of on the order of the equation, which is exponential for the original method and linear for a recent improvement . In this paper, we present a technique to get rid of this constant factor, by applying Newton's method up to an order like and trading the remaining Newton steps against a lazy or relaxed algorithm in a suitable FFT model. Let [[z]]> be the ring of power series over an effective ring >. It will be convenient to assume that \\>. In this paper, we are concerned with the efficient resolution of implicit equations over[[z]]>. Such an equation may be presented in fixed-point form as <\equation> F=\(F), where is an indeterminate vector in [[z]]> with \>. The operator > is constructed using classical operations like addition, multiplication, integration or postcomposition with a series \[[z]]> with =0>. In addition, we require that the -th coefficient of (F)> depends only on coefficients > with n>, which allows for the recursive determination of all coefficients. In particular, linear and algebraic differential equations may be put into the form(). Indeed, given a linear differential system <\eqnarray*> >||>>|>||\>>>> where is an r> matrix with coefficients in [[z]]>, we may take (F)=I+ A*F>. Similarly, if is a tuple of polynomials in [[z]][F]=\[[z]][F,\,F]>, then the initial value problem <\eqnarray*> >||>>|>||\>>>> may be put in the form () by taking (F)=I+ P(F)>. For our complexity results, and unless stated otherwise, we will always assume that polynomials are multiplied using FFT multiplication. If > contains all >-th roots of unity with \>, then it is classical that two polynomials of degrees > n> can be multiplied using (n)=O(n*log n)> operations over > . In general, such roots of unity can be added artificially and the complexity becomes (n)=O(n*log n*log log n)>. We will respectively refer to these two cases as the and the FFT models. In both models, the cost of one FFT transform at order is > (n)/3>, where we assume that the FFT transform has been replaced by a TFT transform in the case when is not apower of two. Let(\)> be the set of r> matrices over >. It is classical that two matrices in(\)> can be multiplied in time >)> with \2.376> . We will denote by (n,r)> the cost of multiplying two polynomials of degrees > n> with coefficients in(\)>. By FFT-ing matrices of polynomials, one has (n,r)=O(n*r>+(n)*r)> and (n,r)\(n)*r> if . In , it was shown that Newton's method may be applied in the power series context for the computation of the first coefficients of the solution to () or() in time (n))>. However, this complexity does not take into account the dependence on the order, which was shown to be exponential in . Recently, this dependence in has been reduced to a linear factor. In particular, the first coefficients of the solution to() can be computed in time (n,r))>. 
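As an illustration of the claim that matrix polynomial multiplication can be performed by FFT-ing the matrices entrywise, with cost of the shape O(n*r^omega+M(n)*r^2), the following small Python sketch transforms each of the r^2 polynomial entries once, performs the pointwise products as r x r matrix products, and recovers the result by r^2 inverse transforms. It is only meant as an illustration: numpy's floating-point FFT stands in for the exact transforms over the ground ring, and the helper name matrix_poly_mul is ad hoc.

<\verbatim>
# Multiplying two r x r matrices of polynomials by transforming the matrices
# entrywise, multiplying pointwise in the transformed domain, and transforming
# back.  Floating-point FFTs stand in for the exact transforms of the text.
import numpy as np

def matrix_poly_mul(A, B):
    """A, B: arrays of shape (r, r, n), each entry a polynomial of degree < n.
    Returns the full product, with entries of degree < 2n - 1."""
    r, _, n = A.shape
    N = 2 * n                       # transform length, large enough for the product
    Ah = np.fft.fft(A, N, axis=2)   # one forward transform per entry (r^2 of them)
    Bh = np.fft.fft(B, N, axis=2)
    Ch = np.einsum('ikx,kjx->ijx', Ah, Bh)               # N pointwise r x r matrix products
    return np.fft.ifft(Ch, axis=2).real[..., :2 * n - 1]  # r^2 inverse transforms

# check against naive convolutions on a small random example
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 2, 4)).astype(float)
B = rng.integers(-3, 4, size=(2, 2, 4)).astype(float)
naive = np.zeros((2, 2, 7))
for i in range(2):
    for j in range(2):
        for l in range(2):
            naive[i, j] += np.convolve(A[i, l], B[l, j])
assert np.allclose(matrix_poly_mul(A, B), naive)
</verbatim>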
In fact, the resolution of() in the case when and are replaced by matrices in (\[[z]])> (\)> can also be done in time (n,r))>. Taking >, this corresponds to the computation of a fundamental system of solutions. However, the new complexity is not optimal in the case when the matrix is sparse. This occurs in particular when a linear differential equation <\equation> f=L*f+\+L*f. is rewritten in matrix form. In this case, the method from for the asymptotically efficient resolution of the vector version of() as a function of gives rise to an overhead of , due to the fact that we need to compute a full basis of solutions in order to apply the Newton iteration. In , the alternative approach of relaxed evaluation was presented in order to solve equations of the form(). More precisely, let (n)> be the cost to compute terms of the product of two series \[[z]]>. This means that the terms of and are given one by one, and that we require > to be returned as soon as ,\,f> and ,\,g> are known (,n-1>). In , we proved that (n)=O((n)*log n)>. In the standard FFT model, this bound was further reduced in to (n)=O((n)*\>)>. We also notice that the additional or>)> overhead only occurs in FFT models: when using Karatsuba or Toom-Cook multiplication , one has(n)\(n)>. One particularly nice feature of relaxed evaluation is that the mere relaxed evaluation of (F)> provides us with the solution to(). Therefore, the complexity of the resolution of systems like() or() only depends on the sparse size of > as an expression, without any additional overhead in . Let (n,r)> denote the complexity of computing the first coefficients of the solution to(). By what precedes, we have both (n,r)=O((n)*r)> and (n,r)=O((n)*r*log n)>. Anatural question is whether we may further reduce this bound to (n,r)=O((n)*r)> or even (n,r)\(n)*r>. This would be optimal in the sense that the cost of resolution would be the same as the cost of the verification that the result is correct. A similar problem may be stated for the resolution of systems() or(). In this paper, we present several results in this direction. The idea is as follows. Given \>, we first choose a suitable n>, typically of the order >>. Then we use Newton iterations for determining successive blocks of coefficients of in terms of the previous coefficients of and . The product is computed using a lazy or relaxed method, but on FFT-ed blocks of coefficients. Roughly speaking, we apply Newton's method up to an order , where the overhead of the method is not yet perceptible. The remaining Newton steps are then traded against an asymptotically less efficient lazy or relaxed method without the overhead, but which is actually more efficient for small when working on FFT-ed blocks of coefficients. It is well known that FFT multiplication allows for tricks of the above kind, in the case when a given argument is used in several multiplications. In the case of FFT trading, we artificially replace an asymptotically fast method by a slower method on FFT-ed blocks, so as to use this property. We refer to (see also remark below) for a variant and further applications of this technique (called FFT caching by the author). The central idea behind is also similar. In section, we outline yet another application to the truncated multiplication of dense polynomials. The efficient resolution of differential equations in power series admits several interesting applications, which are discussed in more detail in . 
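To make the lazy resolution of F=\(F) concrete, here is a minimal Python sketch for a toy scalar equation, f = 1 + \int f^2, that is f' = f^2 with f(0)=1, whose solution 1/(1-z) makes the output easy to check. The example and the helper square_coeff are ad hoc; the online product used below is the naive quadratic one, whereas the relaxed algorithms cited above bring its cost down to R(n).

<\verbatim>
# Lazy solution of a fixed-point equation f = Phi(f): the n-th coefficient of
# Phi(f) only involves coefficients of f of index < n, so the coefficients can
# be produced one after another by evaluating Phi online.
from fractions import Fraction

N = 10
f = [Fraction(1)]                       # initial condition f_0 = 1

def square_coeff(f, n):
    """Coefficient n of f*f; it only uses f_0, ..., f_n."""
    return sum(f[i] * f[n - i] for i in range(n + 1))

for n in range(1, N):
    # coefficient n of Phi(f) = 1 + \int f^2 equals (f^2)_{n-1} / n,
    # which only involves the previously computed f_0, ..., f_{n-1}
    f.append(square_coeff(f, n - 1) / n)

assert f == [Fraction(1)] * N           # the solution is 1/(1-z) = 1 + z + z^2 + ...
</verbatim>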
In particular, certified integration of dynamical systems at high precision is a topic which currently interests us. More generally, the efficient computation of Taylor models is a potential application. Given a power series \[[z]]> and similarly for vectors or matrices of power series (or power series of vectors or matrices), we will use the following notations: <\eqnarray*> >||+\+f*z>>|>||+\+f*z,>>>> where \> with j>. In order to simplify our exposition, it is convenient to rewrite all differential equations in terms of the operator =z*\/\ z>. Given a matrix \(\[[z]])> with =0>, the equation <\equation> \ M=A*M admits unique solution \(\[[z]])> with =Id>. The main idea of is to provide a Newton iteration for the computation of . More precisely, with our notations, assume that > and 1>=(M1>)> are known. Then we have <\equation> M\[M-(M*\1> M1>) (\ M-A*M)]. Indeed, setting <\eqnarray*> >||*M-\ M=O(z)>>|>||*\1> M1>) *\=O(z),>>>> we have <\eqnarray*> -A) \>|| M)*(M)1>*\+(1+O(z))*\-A*\>>|||+O(z))*(M1>+O(z))*\+(1+O(z))*\-A*\>>|||+O(z),>>>> so that -A) (M+\)=O(z)> and =M+O(z)>. Computing > and 1>> together using() and the usual Newton iteration <\equation> M1>=[M1>+M1>*(1-M*M1>)] for inverses yields an algorithm of time complexity (n,r))>. The quantities > and =\> may be computed efficiently using the middle product algorithm. Instead of doubling the precision at each step, we may also increment the number of known terms with a fixed number of terms . More precisely, given 0>, we have <\equation> M\[M-(M*\1> M1>) (\ M-A*M)]. This relation is proved in a similar way as (). The same recurrence may also be applied for computing blocks of coefficients of the unique solution \[[z]]> to the vector linear differential equation <\equation> \ F=A*F with initial condition =I\\>: <\equation> F\[F-(M*\1> M1>) (\ F-A*F)]. Both the right-hand sides of the equations () and () may be computed efficiently using the middle product. Assume now that we want to compute > and take n>. For simplicity, we will assume that with \> and that > contains all >-th roots of unity for \>. We start by decomposing > using <\eqnarray*> >||+\+F>>|>||>>>> and similarly for . Setting , we have <\equation*> P=(A+A)\F+\+(A+A)\F+(A+A)\F, where >> stands for the middle product (see figure). Instead of computing > directly using this formula, we will systematically work with the FFT transforms >> of > at order and similarly for +A> and >, so that <\equation*> P>=(A+A)>\F>+\+(A+A)>\F>+(A+A)>\F>. Recall that we may resort to TFT transforms if is not a power of two. Now assume that >, 1>> and <\equation*> P=P-(A+A)\F=((A+\+A)*(F+\+F)) are known. Then the relation () yields <\equation> F=(M*\1> M1>) P. In practice, we compute > via 1>*P)>, X> and =(M*Y)>, using FFT multiplication. Here we notice that the FFT transforms of > and 1>> only need to be computed once. <\big-figure||gr-frame|>|gr-geometry||gr-grid||1>|gr-grid-old||1>|gr-edit-grid-aspect|||>|gr-edit-grid||1>|gr-edit-grid-old||1>|gr-line-arrows|none|gr-fill-color|default|gr-text-at-valign|top|gr-text-at-halign|center|gr-grid-aspect|||>|gr-grid-aspect-props|||>||||>>>||>>||||>>||>||>||>|>|>>|>|>>|>|>>|>|>>|>|>>|>|>>|>|>>|>|>>||||>>>||>>>>> Illustration of the computation of > using middle products. 
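The iteration used above for the inverse is the classical Newton (Schulz) iteration. As a sanity check of how it doubles the precision at every step, here is a small Python sketch for scalar power series with exact rational coefficients; the helpers mul_trunc and inverse are ad hoc, and the naive truncated product stands in for FFT multiplication.

<\verbatim>
# Newton iteration for inverses:  X  <-  [X + X*(1 - M*X)]  truncated at order 2p,
# doubling the precision at every step (scalar series, exact coefficients).
from fractions import Fraction

def mul_trunc(a, b, n):
    """Coefficients 0, ..., n-1 of the product of two truncated series."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def inverse(M, n):
    """First n coefficients of 1/M, assuming M[0] is invertible."""
    X = [Fraction(1) / M[0]]
    p = 1
    while p < n:
        p = min(2 * p, n)
        Xp = X + [Fraction(0)] * (p - len(X))
        E = mul_trunc(M, Xp, p)            # M*X = 1 + O(z^{p/2})
        E = [-e for e in E]
        E[0] += 1                          # E = 1 - M*X
        X = [x + y for x, y in zip(Xp, mul_trunc(Xp, E, p))]
    return X

M = [Fraction(c) for c in (1, -1, 0, 0, 0, 0, 0, 0)]    # M = 1 - z
assert inverse(M, 8) == [Fraction(1)] * 8               # 1/(1-z) = 1 + z + z^2 + ...
</verbatim>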
Putting together what has been said and assuming that >, 1>> and > are known, we have the following algorithm for the successive computation of ,\,F>: <\body> ,k-1> +A)>\FFT(A+A)> ,k-1> <\indent> >\FFT(F)> )>\(A+A)>\F>+\+(A+A)>\F>> \FFT1>((P)>)> \(M*\1> M1>) P> In this algorithm, the product is evaluated in a lazy manner. Of course, using astraightforward adaptation, one may also use a relaxed algorithm. In particular, the algorithm from is particularly well-suited in combination with middle products. In the standard FFT model, is even faster. <\theorem> Consider the differential equation )>, where has non-zero entries. Assume the standard FFT model. Then there exists an algorithm which computes the truncated solution > to)> at order in time >(n)*(s+4/3*r)>, provided that . In particular, (n,r)\7/3*(n)*r>. <\proof> In our algorithm, we take r>\>, where > grows very slowly to infinity (such as =log log log n>), so that log n>. Let us carefully examine the cost of our algorithm for this choice of : <\enumerate> The computation of >, > and 1>> requires a time (m)*r)=o((n)*r)>. The computation of the FFT-transforms >\FFT(A)>, >\FFT(F)> and the inverse transforms \FFT1>(P>)> has the same complexity (m)*k*s\(n)*s> as the computation of the final matrix product at order . The computation of )> middle products +A)>\*F>> in the FFT-model requires a time *m*s)=O(k*n*s)>. Using a relaxed multiplication algorithm, the cost further reduces to (k)*m*s)=O(n*\*s*log r)=o((n)*s)>. The computation of the > using the Newton iteration () involves <\enumerate> Matrix FFT-transforms of > and 1>>, of cost (m)*r)=o((n)*r)>. vectorial FFT-transforms of cost > 4/3*(m)*k*r\4/3*(n)*r>. matrix vector multiplications in the FFT-model, of cost )=O(n*r)=o((n)*r)>. Adding up the different contributions completes the proof. Notice that the computation of n/k\*k-n\k> more terms has negligible cost, in the case when is not a multiple of . <\remark> In the synthetic FFT model, the recursive FFT-transforms >>, >> and >> require an additional space overhead, when using the polynomial adaptation of Schönhage-Strassen multiplication. Consequently, the cost in point now becomes <\equation*> O((k)*m*s*log n)=O(n*\*s*log r*log log r*log n)=O(n)*s***log*r*log log r|log log n>. Provided that r*log log r=o(log log n)>, we obtain the same complexity as in the theorem, by choosing > sufficiently slow. <\remark> With minor changes, the algorithm can be adapted in order to compute the unique solution of the matrix M=A*M> with =Id> (which corresponds to a fundamental system of solutions to ()). In that case, the complexity becomes > (n)*(r+4/3*r)\7/3*(n)*r>. <\remark> It is instructive to compare our complexity bounds with the old complexity bounds if we do not use FFT trading. In that case, let (n,r)> denote the complexity of computing both > and 1>>. One has <\equation*> (2*n,r)=(n,r)+5*(n)*r+O(n*r>), since the product *F> and the formulas() and () give rise to matrix multiplications. This yields (n,r)\5*(n)*r> from which we may subtract (n)*r> in the case when 1>> is not needed. The bound may be further improved to (n,r)\9/2*(n)*r> using. Similarly, the old bound for the resolution of() is > (n)*(17/6*r+s/2+2/3*r)>, or > (n)*(31/12*r+s/2+2/3*r)> when using. <\remark> In point 3 of the proof, the computation of the middle products using anaive lazy algorithm requires a time >*(n)*s)>. In practice, we may actually take=0>, in which case there is no particular penalty when using a naive algorithm instead of a relaxed one. 
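The bookkeeping behind FFT trading can be illustrated on the product alone. In the Python sketch below, numpy's floating-point FFT stands in for the exact transforms, and a plain overlap-add decomposition of the blocks is used instead of the middle products of the figure, so the constants differ slightly; still, every block of m coefficients is transformed exactly once, the block products are mere pointwise multiplications in the transformed domain, and each output block requires a single inverse transform. Since output block i only involves input blocks 0, ..., i, the loop can be run in a lazy fashion, with the Newton step of the algorithm supplying the next block of F just before it is needed; here both factors are taken to be known in advance, so as to isolate the product.

<\verbatim>
# Truncated product of two series cut into k blocks of m coefficients, with the
# blocks kept in FFT representation: k forward transforms per factor, pointwise
# accumulation of the block products, and one inverse transform per output block.
import numpy as np

def blocked_trunc_mul(a, f, k, m):
    """Truncated product (a*f) mod z^(k*m), using blockwise FFTs of length 2m."""
    ah = [np.fft.fft(a[j*m:(j+1)*m], 2*m) for j in range(k)]   # transformed once
    fh = [np.fft.fft(f[j*m:(j+1)*m], 2*m) for j in range(k)]   # transformed once
    out = np.zeros(k*m + m)
    for i in range(k):                        # output block i needs blocks 0..i only
        acc = np.zeros(2*m, dtype=complex)
        for j in range(i + 1):
            acc += ah[j] * fh[i - j]          # pointwise products, no transforms here
        block = np.fft.ifft(acc).real         # one inverse transform per output block
        out[i*m:i*m + 2*m] += block           # overlap-add at offset i*m
    return out[:k*m]

rng = np.random.default_rng(1)
k, m = 4, 8
a = rng.integers(-5, 6, size=k*m).astype(float)
f = rng.integers(-5, 6, size=k*m).astype(float)
assert np.allclose(blocked_trunc_mul(a, f, k, m), np.convolve(a, f)[:k*m])
</verbatim>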
In fact, for larger values of , it is rather the hypothesis which is easily violated. In that case, one may take k\r> instead of r> and still gain a constant factor between and . <\remark> The results of this section apply in particular to the computation of the exponential > of a power series . In that case, theorem provides a way to compute> in time > 7/3*(n)>, which yields an improvement over . Notice that FFT trading is a variant of Newton caching in , but not exactly the same: in our case, we use an ``order '' Newton iteration, whereas Bernstein uses classical Newton iterations on block-decomposed series. In the case of power series division at order or division with remainder of a polynomial of degree > 2*n> by a polynomial of degree > n>, the present technique also allows for improvements . In both cases, the new complexity is > 5/3*(n)>. In addition, we notice that the technique of FFT trading allows for a ``smooth junction'' between the Karatsuba (or Toom-Cook) model and the FFT model. Assuming that one is able to solve the linearized version of an implicit equation(), it is classical that Newton's method can again be used to solve the equation itself. Before we show how to do this for algebraic systems of differential equations, let us first give a few definitions for polynomial expressions. Given a vector [[z]]> of series variables, we will represent polynomials in [[z]][F]\\[F][[z]]=\[[z]][F,\,F]> by dags (directed acyclic graphs), whose leaves are either series in[[z]]> or variables >, and whose inner nodes are additions, subtractions or multiplications. An example of such a dag is shown in figure. We will denote by > and> the number of nodes which occur as an operand result of a multiplication. We call +s)/3> the of the dag and the total number of nodes the of the dag. Using the FFT, one may compute > in terms of > in time >(n)*s+n*a>. <\big-figure||gr-frame|>|gr-geometry||gr-grid||gr-grid-old||1>|gr-edit-grid-aspect|||>|gr-edit-grid||1>|gr-edit-grid-old||1>|gr-grid-aspect|||>|gr-grid-aspect-props|||>|gr-text-at-halign|center||>>|>|>>|>|>>|>|>>|>|>>|>|>>|>|>>||>||>||>||>||>||>|||>|||>>>> Example of a polynomial expression in [[z]][F,F]>, represented by a dag. In this particular example, the multiplicative size of the polynomial is (since =4> and =3>)and its total size . Notice in particular that squares only count for in the multiplicative size. Now assume that we are given an -dimensional system <\equation> \ F=P(F), with initial condition =I\\>, and where is a tuple of polynomials in [[z]][F,\,F]\\[F,\,F][[z]]>. Given the unique solution to this initial value problem, consider the Jacobian matrix <\equation*> J= P|\ F> (F)= P|\ F>>|>| P|\ F>>>|>||>>| P|\ F>>|>| P|\ F>>>>>>(F). Assuming that > is known, we may compute > in time (m))> using automatic differentiation. As usual, this complexity hides an -dependent overhead, which is bounded by (n)))>. For m>, we have <\eqnarray*> +F)>||)+J*F+O(z)>>| F>||)+J*F,>>>> so that <\equation> F=[(\-J)1> (P(F))]. Let us again adopt the notation(). Having determined > and > for each subexpression of up to a given order , the computation of > and all > can be done in three steps: <\enumerate> The computation of all )>, using lazy or relaxed evaluation, where =F+\+F>. The determination of =(\-J)1> P()> using (). The computation of all >. We notice that > and )> are almost identical, since <\equation*> Q(F)-Q()= Q|\ F> (F)*F. 
If is a product, then )> can be determined from )> and )> using a suitable middle product with ``omitted extremes'' (see figure ). Step 3 consists of an adjustment, which puts these extremes back in the sum. Of course, the computations of products can be done in a relaxed fashion. <\big-figure||gr-frame|>|gr-geometry||gr-grid||gr-grid-old||1>|gr-edit-grid-aspect|||>|gr-edit-grid||gr-edit-grid-old||1>|gr-line-arrows|none|gr-dash-style|default|gr-text-at-halign|center|gr-fill-color|pastel blue||||>>>||>>||||>>>||>>||||>>||>||>||>>|||>>|||>>||>>||>>>>> Illustration of the product =(U*V)(F)>. The part inside the square corresponds to )> and the two small triangles to the difference -Q()>. <\theorem> Consider an -dimensional system )>, where is a polynomial, given by a dag of multiplicative size and total size . Assume the standard FFT model. Then there exist an algorithm which computes > in time (n)*(s+4/3*r)+O(n*t)>, provided that . <\proof> When working systematically with the FFT-ed values of the >, steps and give rise to a cost (n)*s> for the FFT transforms and a cost for the scalar multiplications and additions. In a similar way as in the proof of theorem, the computation of the> gives rise to a cost > 4/3*(n)*r>. Again, the cost of the computation of the initial> and > is negligible. <\remark> The bound becomes (n)*(s+4/3*r)+O(t*n*log n)> in the synthetic FFT model and under the assumption r*log log r=o(log log n)>. This bound is derived in asimilar way as in remark . <\remark> A detailed comparison between the new and old complexities is difficult, because the size parameter is not entirely adequate for expressing the old complexity. In the worst case, the old complexity is > (n)*(s*r+8/3*r+4/3*r)+O(t*r*n)>, which further improves to > (n)*(s*r+13/6*r+4/3*r)+O(t*r*n)> using . However, the factor is quite pessimistic, since it occurs only when most of the subexpressions of depend on most of the variables ,\,F>. If the multiplicative subexpressions depend on an average number of> variables >, then the process of automatic differentiation can be optimized so as to replace by > in the bound (roughly speaking). It is well-known that discrete FFT transforms are most efficient on blocks of size > with \>. In particular, without taking particular care, one may lose a factor when computing the product of two polynomials and of degree with 2>>. One strategy to remove this problem is to use the TFT (truncated Fourier transform) as detailed in with some corrections and further improvements in . An alternative approach is to cut and into n/m\> parts of size >, where grows slowly to infinity with . Let us denote <\eqnarray*> >||+\+P*z>>|>||>>>> Attention to the minor change with respect to the notations from section. Now we multiply and by <\enumerate> FFT-ing the blocks > and > at size . Naively multiplying the resulting FFT-ed polynomials in >: <\eqnarray*> >>||>+\+P>*U>>|>>||>+\+Q>*U>>>> Transforming the result back. This approach has a cost > C*k*m*log m+2*k*m\C*n*log n+2*k*n> which behaves more ``smoothly'' as a function of . In this particular case, it turns out that the TFT transform is always better, because the additional linear factor is reduced to . However, in the multivariate setting, the TFT also has its drawbacks. More precisely, consider two multivariate polynomials \[z,\,z]> whose supports have a ``dense flavour''. Typically, we may assume the supports to be convex subsets of >. 
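Returning for a moment to the dag representation of polynomial expressions introduced above, the size parameters occurring in the complexity bounds of this section can be made concrete as follows. The Python sketch below builds a small hypothetical dag (not the one of the figure), counts the number of nodes occurring as operands of multiplications, the number of multiplication nodes, hence the multiplicative size (s_1+s_2)/3 and the total size, and evaluates the dag on truncated series so that every shared subexpression is computed only once; all names are ad hoc.

<\verbatim>
# A polynomial expression represented as a dag, its multiplicative and total
# sizes, and its evaluation on truncated series with shared subexpressions
# computed only once (illustrative example; exact rational coefficients).
from fractions import Fraction

def mul_trunc(a, b, n):
    c = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[:n - i]):
            c[i + j] += ai * bj
    return c

def add_trunc(a, b, n):
    return [x + y for x, y in zip(a[:n], b[:n])]

# inner nodes: name -> ('add' | 'mul', operand, operand); leaves: the variables F1, F2
dag = {'Q': ('mul', 'F1', 'F2'),
       'S': ('mul', 'Q', 'Q'),        # a square: one operand node, one result node
       'P': ('add', 'Q', 'S')}        # P = F1*F2 + (F1*F2)^2, sharing the node Q
leaves = {'F1': 0, 'F2': 1}

def evaluate(node, F, n, cache):
    if node in cache:
        return cache[node]
    if node in leaves:
        val = F[leaves[node]][:n]
    else:
        op, u, v = dag[node]
        a, b = evaluate(u, F, n, cache), evaluate(v, F, n, cache)
        val = mul_trunc(a, b, n) if op == 'mul' else add_trunc(a, b, n)
    cache[node] = val
    return val

mul_nodes = {node for node, (op, u, v) in dag.items() if op == 'mul'}
operands = {w for (op, u, v) in dag.values() if op == 'mul' for w in (u, v)}
s = Fraction(len(operands) + len(mul_nodes), 3)   # multiplicative size (s_1 + s_2)/3
t = len(dag) + len(leaves)                        # total size

F = ([Fraction(c) for c in (1, 1, 0, 0)],         # F1 = 1 + z
     [Fraction(c) for c in (0, 1, 0, 0)])         # F2 = z
value = evaluate('P', F, 4, {})                   # z + 2*z^2 + 2*z^3, each node computed once
</verbatim>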
In addition one may consider truncated products, where we are only interested in certain monomials of the product. In order to apply the TFT, one typically has to require in addition that the supports of and are initial segments of >. Even then, the overhead for certain types of supports may increase if gets large. One particularly interesting case for complexity studies is the computation of the truncated product of two dense polynomials and with total degree > n>. This is typically encountered in the integration of dynamical systems using Taylor models. Although the TFT is a powerful tool for small dimensions 4)>, FFT trading might be an interesting complement for moderate dimensions d\8)>. For really huge dimensions, one may use or . The idea is again to cut in blocks <\equation*> |||||,\,i)>P*U>|>|=U>*\U>)>>|>||(m,\,m)>P*z>||=z>*\z>)>>>>> where n/m\> is small (and preferably a power of two). Each block is then transformed using a suitable TFT transform (notice that the supports of the blocks are still initial segments when restricted to the block). We next compute the truncated product of the TFT-ed polynomials P>*U> and Q>*U> in a naive way and TFT back. The naive multiplication step involves <\equation*> N=*+*+\+* multiplications of TFT-ed blocks. We may therefore hope for some gain whenever *(n/k)> is small with respect to (n)\d*(n)>. We always gain for and usually also for , in which case \2*d>. Even for , when \4/3*d>, it is quite possible that one may gain in practice. The main advantage of the above method over other techniques, such as the TFT, is that the shape of the support is preserved during the reduction P*z\P*U> (as well as for the ``destination support''). However, the TFT also allows for some additional tricks and it is not yet clear to us which approach is best in practice. Of course, the above technique becomes even more useful in the case of more general truncated multiplications for dense supports with shapes which do not allow for TFT multiplication. \ For small values of , we notice that the pair/odd version of Karatsuba multiplication presents the same advantage of geometry preservation (see for the one-dimensional case). In fact, fast multiplication using FFT trading is quite analogous to this method, which generalizes for Toom-Cook multiplication. In the context of numerical computations, the property of geometry preservation is reflected by increased numerical stability.\ To finish, we would like to draw the attention of the reader to another advantage of FFT trading: for really huge values of , it allows for the reduction of the memory consumption. For instance, assume that we want to multiply two truncated power series and at order. With the above notations, one may first compute >,\,P>>. For ,k-1>, we next compute >>, >=P>*Q>+\+P>*Q>> and 1>(R>)>. The idea is now that >,\,P>,R>,\,R>> are no longer used at stage , so we may remove them from memory. We have summarized the main results of this paper in tables and . We recall that (n)=O((n)*\>)> in the standard FFT model and (n)=O((n)*log n)> otherwise. In practice, we expect that the factor >)> behaves very much like a constant, which equals at the point where we enter the FFT model. Consequently, we expect the new algorithms to become only interesting for quite large values of . We plan to come back to this issue as soon as implementations of all algorithms will be available in the library . On the other hand, Newton iterations are better suited to parallel computing than relaxed evaluation. 
An interesting remaining problem is to reduce the cost of computing afundamental system of solutions to(). This would be possible if one can speed up the joint computation of the FFT transforms of f,\,\ f>. <\big-table|||||||||||-dimensional system of linear differential equations>||>|||>||> (n)*r>>|> (n)*s>>>||> 4*(n)*r>>|> (n)*(17/6*r+s/2+2/3*r)>>>||> 7/3*(n)*r>>|> (n)*(s+4/3*r)>>>>>>> Complexities for the resolution of an -dimensional system F=A*F> of linear differential equations up to terms. We either compute a fundamental system of solutions or a single solution with aprescribed initial condition. The parameter stands for the number of non-zero coefficients of the matrix (we always have r>). We assume that in the standard FFT model and r*log log r=o(log log n)> in the synthetic FFT model. <\big-table|||||||||-dimensional system of algebraic differential equations>|>||>||> (n)*s+O(n*t)>>>||> (n)*(s*r+8/3*r+4/3*r)+O(t*r*n)>>>||> (n)*(s+4/3*r)+O(t*n)>>>>>>> Complexities for the resolution of an -dimensional system F=P(F)> up to terms, where is a polynomial of multiplicative size and total size . For the bottom line, we assume the standard FFT model and we require that . In the synthetic FFT model, the bound becomes > (n)*(s+4/3*r)+O(t*n*log n)>, under the assumption r*log log r=o(log log n)>. A final interesting question is to which extent Newton's method can be generalized. Clearly, it is not hard to consider more general equations of the kind <\equation*> \ F=P(F,F(z),\,F(z)), since the series ),\,F(z)> merely act as perturbations. However, it seems harder (but maybe not impossible) to deal with equations of the type <\equation*> \ F=P(F,F(q*z)), since it is not clear how to generalize the concept of a fundamental system of solutions and its use in the Newton iteration. In the case of partial differential equations with initial conditions on a hyperplane, the fundamental system of solutions generally has infinite dimension, so essentially new ideas would be needed here. Nevertheless, the strategy of relaxed evaluation applies in all these cases, with the usual overhead in general and >)> overhead in the synthetic FFT model. <\bibliography|bib|alpha|all> <\bib-list|BCO+06> A.Bostan, F.Chyzak, F.Ollivier, B.Salvy, É. Schost, and A.Sedoglavic. Fast computation of power series solutions of systems of differential equation. preprint, april 2006. submitted, 13 pages. D.Bernstein. Removing redundancy in high precision Newton iteration. Available from . R.P. Brent and H.T. Kung. Fast algorithms for manipulating formal power series. , 25:581--595, 1978. D.G. Cantor and E.Kaltofen. On fast multiplication of polynomials over arbitrary algebras. , 28:693--701, 1991. S.A. Cook. . PhD thesis, Harvard University, 1966. J.W. Cooley and J.W. Tukey. An algorithm for the machine calculation of complex Fourier series. , 19:297--301, 1965. Guillaume Hanrot, Michel Quercia, and Paul Zimmermann. The middle product algorithm I. speeding up the division and square root of power series. , 14(6):415--438, 2004. G.Hanrot and P.Zimmermann. Newton iteration revisited. zimmerma/papers/fastnewton.ps.gz>. Guillaume Hanrot and Paul Zimmermann. A long note on Mulders' short product. Research Report 4654, INRIA, December 2002. Available from hanrot/Papers/mulders.ps>. A.Karatsuba and J.Ofman. Multiplication of multidigit numbers on automata. , 7:595--596, 1963. R.Lohner. . PhD thesis, Universität Karlsruhe, 1988. R.Lohner. On the ubiquity of the wrapping effect in the computation of error bounds. 
In U. Kulisch, R. Lohner, and A. Facius, editors, , pages 201--217. Springer, 2001. G. Lecerf and É. Schost. Fast multivariate power series multiplication in characteristic zero. , 5(1):1--10, September 2003. K. Makino and M. Berz. Remainder differential algebras and their applications. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, , pages 63--74. SIAM, Philadelphia, 1996. K. Makino and M. Berz. Suppression of the wrapping effect by Taylor model-based validated integrators. Technical Report MSUHEP-40910, Michigan State University, 2004. R. T. Moenck and J. H. Carter. Approximate algorithms to derive exact solutions to systems of linear equations. In , volume 72 of , pages 65--73, Berlin, 1979. Springer. R. E. Moore. . Prentice Hall, Englewood Cliffs, N.J., 1966. V. Pan. , volume 179 of . Springer, 1984. G. Schulz. Iterative Berechnung der reziproken Matrix. , 13:57--59, 1933. A. Schönhage. Variations on computing reciprocals of power series. , 74:41--46, 2000. A. Schönhage and V. Strassen. Schnelle Multiplikation grosser Zahlen. , 7:281--292, 1971. V. Strassen. Gaussian elimination is not optimal. , 13:352--356, 1969. A. L. Toom. The complexity of a scheme of functional elements realizing the multiplication of integers. , 4(2):714--716, 1963. J. van der Hoeven. Lazy multiplication of formal power series. In W. W. Küchlin, editor, , pages 17--20, Maui, Hawaii, July 1997. J. van der Hoeven. Relax, but don't be too lazy. , 34:479--542, 2002. J. van der Hoeven. New algorithms for relaxed multiplication. Technical Report 2003-44, Université Paris-Sud, Orsay, France, 2003. J. van der Hoeven. Relaxed multiplication using the middle product. In Manuel Bronstein, editor, , pages 143--147, Philadelphia, USA, August 2003. J. van der Hoeven. The truncated Fourier transform and applications. In J. Gutierrez, editor, , pages 290--296, Univ. of Cantabria, Santander, Spain, July 4--7, 2004. J. van der Hoeven. Notes on the Truncated Fourier Transform. Technical Report 2005-5, Université Paris-Sud, Orsay, France, 2005. J. van der Hoeven. On effective analytic continuation. Technical Report 2006-15, Université Paris-Sud, Orsay, France, 2006. J. van der Hoeven et al. Mmxlib: the standard library for Mathemagix, 2002--2006. D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 1--6, New York City, May 25--27, 1987.