Stiff differential equations

CNRS, Laboratoire d'informatique, Campus de l'École polytechnique, 1, rue Honoré d'Estienne d'Orves, Bâtiment Alan Turing, CS35003, 91120 Palaiseau, France

Abstract. In this paper, we present various new algorithms for the integration of stiff differential equations that allow the step size to increase proportionally with time. We mainly focus on high precision integrators, which leads us to use high order Taylor schemes. We will also present various algorithms for certifying the numerical results, again with the property that the step size increases with time.

1. Introduction

Consider the differential equation
$$\dot\varphi + \Lambda\,\varphi = F(\varphi), \tag{1}$$
where $\Lambda \in \mathbb{R}^{d\times d}$ is a diagonal matrix with positive real entries $\lambda_1 \le \cdots \le \lambda_d$, and where $F \in \mathbb{R}[F_1,\ldots,F_d]^d$ is a polynomial forcing term. We are interested in analytic solutions $\varphi\colon I \to \mathbb{R}^d$, where $I$ is either an interval of the form $[0,T]$ with $T>0$, or a larger subset of $\mathbb{C}$ that contains an interval of this form. The components of this mapping will be written $\varphi^{[1]},\ldots,\varphi^{[d]}\colon I \to \mathbb{R}$, whereas subscripts will be reserved for the coefficients of local power series solutions. A similar notation will be used for the components of other vectors and matrices.

If the largest eigenvalue $\lambda_d$ of $\Lambda$ gets large with respect to the forcing term $F$, then the equation (1) is said to be stiff. Generic numerical integration schemes experience difficulties with this kind of equations, whether we use Euler's method, a Runge–Kutta scheme, or high order algorithms based on Taylor expansions. Roughly speaking, the problem is that all generic methods become inaccurate when the step size exceeds $1/\lambda_d$. One of the most difficult cases is when $\lambda_d$ is large, but each of the quotients $\lambda_{k+1}/\lambda_k$ remains reasonably close to one. This happens for instance when $\lambda_k = k$, a case that is naturally encountered when discretizing certain partial differential equations.

Implicit integration schemes allow for larger step sizes than generic methods, but involve the computation and inversion of Jacobian matrices, which may be expensive in high dimension. But even such more expensive schemes only seem to deal with the step size issue in a somewhat heuristic manner: one typically selects a scheme that is "stable" when applied to any equation $\dot\varphi^{[k]} + \lambda\,\varphi^{[k]} = 0$ with $\lambda > 0$, and then hopes that the scheme continues to behave well for the actual target equation (1). We refer to [Press et al. 2007, section 17.5] and [Butcher 2000] for more information on classical integration schemes for stiff differential equations.

We are mainly interested in high precision computations, in which case it is natural to use high order integration schemes that are based on Taylor expansions. In this context, we are not aware of any numerical method that allows the step size to increase proportionally with time. The main aim of this paper is to present such a method, together with several variants, as well as a way to compute rigorous bounds for the error. Our fastest method is an explicit scheme, but its convergence deteriorates when the eigenvalues $\lambda_k$ are close. The slower and rigorous counterparts rely on the computation of Jacobian matrices.

In sections 3 and 4, we recall various approaches that can be used to compute truncated series solutions to initial value problems, and how to derive high order integration schemes from this.
More precisely, given a numerical approximation $\tilde\varphi(t)$ for $\varphi(t)$ at time $t$, we compute the first $n$ terms of a power series solution $f[t](z)$ to the initial value problem
$$\frac{\partial f[t]}{\partial z} + \Lambda\,f[t] = F(f[t]), \qquad f[t]_0 = \tilde\varphi(t), \tag{2}$$
and return $f[t]_0 + f[t]_1\,\delta + \cdots + f[t]_{n-1}\,\delta^{n-1}$ as an approximation of $\varphi(t+\delta)$ for some suitable step size $\delta$.

The inaccuracy of such schemes for larger step sizes is due to the fact that the $k$-th coefficient of $f[t]$ tends to change proportionally to $\lambda_d^k/k!$ for slight perturbations of $\tilde\varphi(t)$. If $\lambda_d$ is large, then even small multiples of $\lambda_d^k/k!$ quickly dominate $f[t]_k$, which severely limits the step sizes that we can use. This numerical instability is an unpleasant artifact of traditional methods for the integration of differential equations. It is well known that solutions of stiff equations tend to become very smooth after a certain period of time (in which case one says that the system has reached its steady state), but this does not lead to larger step sizes, as we would hope for.

In order to make this idea of "smoothness after a certain period of time" more precise, it is instructive to study the analytic continuation of $\varphi$ in the complex plane. In section 2, for a fixed initial condition $\varphi(0) = c$ and a fixed forcing term $F$, we show the existence of a compact half disk $H_R = \{t \in \mathbb{C} : |t| \le R,\ \operatorname{Re} t \ge 0\}$, $R > 0$, on which the solution $\varphi$ of (1) is analytic, for any choice of $\Lambda$. In particular, for $0 < t \le R/2$, the radius of convergence of the exact Taylor expansion $f^{\mathrm{exact}}[t]$ of $\varphi$ at time $t$ is at least $t$. Setting $K = \max_{t \in H_R} |\varphi(t)|_\infty$, Cauchy's formula thus yields the bound $|f^{\mathrm{exact}}[t]_k|_\infty \le K/t^k$ for all $k \in \mathbb{N}$. This means that efficient integration schemes should allow for step sizes of the order $\vartheta\,t$ at time $t$, for some fixed constant $\vartheta > 0$.

In section 5, we will present various such schemes. They are all based on the computation of good approximations $\tilde f[t] = \tilde f[t]_0 + \tilde f[t]_1 z + \cdots + \tilde f[t]_{n-1} z^{n-1}$ of the exact Taylor expansion $f^{\mathrm{exact}}[t]$ of $\varphi$ at time $t$ and order $n$. By what precedes, such an approximation cannot be obtained by solving (2). Instead, for a suitable so-called critical index $\nu \in \{0,\ldots,d\}$ that depends on $t$, we will solve the following so-called steady-state problem:
$$\frac{\partial f[t]}{\partial z} + \Lambda\,f[t] = F(f[t]), \qquad \begin{cases} f[t]_0^{[i]} = \tilde\varphi(t)^{[i]} & i \le \nu \\ f[t]_n^{[i]} = 0 & i > \nu. \end{cases} \tag{3}$$
Intuitively speaking, the system has reached a steady state for the eigenvalues $\lambda_i$ with $i > \nu$. Such eigenvalues $\lambda_i$ are "large", so it is more accurate to determine the coefficients $f[t]_k^{[i]}$ as a function of later coefficients $f[t]_{k+1}^{[i]},\ldots,f[t]_n^{[i]}$ rather than earlier ones. Furthermore, the coefficients $f^{\mathrm{exact}}[t]_n^{[i]}$ are so "small" that they can be approximated by zeros; this explains the conditions $f[t]_n^{[i]} = 0$. The remaining eigenvalues $\lambda_i$ with $i \le \nu$ are small enough for the step size under consideration that the dependence of $f[t]^{[i]}$ on the initial condition $f[t]_0^{[i]}$ is sufficiently moderate for (2) to admit an accurate solution. We say that the system (1) is in a transient state for such eigenvalues $\lambda_i$ at time $t$.

A final topic is the computation of rigorous error bounds for the numerical integration process. For differential equations that are not stiff, this is a classical theme in interval analysis [Moore 1966; Lanford 1982; Eckmann–Koch–Wittwer 1984; Lohner 1988; Makino–Berz 1996, 2004]. It has long been an open problem to develop efficient reliable integrators for stiff differential equations. We report on progress in this direction in section 6.

2. Analytic continuation

Consider the equation (1) for a fixed initial condition $\varphi(0) = c \in \mathbb{R}^d$ and a fixed forcing term $F$, but where we will allow ourselves to vary $\Lambda$. Our main aim is to show that there exists a compact "half disk"
$$H_R = \{t \in \mathbb{C} : |t| \le R,\ \operatorname{Re} t \ge 0\}$$
with $R > 0$ on which the solution $\varphi$ to (1) exists and is analytic, independently of the choice of $\Lambda$.
We will both prove a weak and simpler version of this result and a stronger version that allows us to take larger radii, and that is also more convenient if we want to actually compute a radius that works.

2.1. Notations

We will need a few more notations. We denote the interior of $H_R$ by
$$H_R^\circ = \{t \in \mathbb{C} : |t| < R,\ \operatorname{Re} t > 0\}.$$
Given a non-empty open subset $U \subseteq \mathbb{C}$, we write $\mathcal{A}(U)$ for the Banach space of bounded analytic functions $g\colon U \to \mathbb{C}$ with finite sup-norm
$$\|g\|_U = \sup_{t \in U} |g(t)|.$$
We will often use vector notation: given $v \in \mathbb{C}^d$, $r, s \in \mathbb{R}^d$, and $g \in \mathcal{A}(U)^d$, we write
$$|v| = \big(|v^{[1]}|, \ldots, |v^{[d]}|\big), \qquad \|g\|_U = \big(\|g^{[1]}\|_U, \ldots, \|g^{[d]}\|_U\big), \qquad r \le s \iff r^{[1]} \le s^{[1]} \wedge \cdots \wedge r^{[d]} \le s^{[d]}.$$
In addition to such componentwise norms, we will also use more traditional sup-norms: given $v \in \mathbb{C}^d$, $g \in \mathcal{A}(U)^d$, and a $d \times d$ matrix $M$, we define
$$|v|_\infty = \max\big(|v^{[1]}|, \ldots, |v^{[d]}|\big), \qquad \|g\|_{U,\infty} = \max\big(\|g^{[1]}\|_U, \ldots, \|g^{[d]}\|_U\big), \qquad \|M\|_\infty = \max_{|v|_\infty = 1} |M v|_\infty.$$

2.2. Reminders from complex analysis

We will also need two well known results from complex analysis, namely an explicit version of the Cauchy–Kovalevskaya theorem and a variant of Montel's theorem.

Theorem 1. Let $\Psi(v) = F(v) - \Lambda v$ and consider the initial value problem
$$\dot\varphi = \Psi(\varphi), \qquad \varphi(0) = c. \tag{4}$$
Let $m > 0$ be such that $|c|_\infty \le m$. Writing $\Psi^{[i]} = \sum_{i_1,\ldots,i_d} \Psi^{[i]}_{i_1,\ldots,i_d}\,(F_1)^{i_1} \cdots (F_d)^{i_d}$, let
$$M = \max_i \sum_{i_1,\ldots,i_d} \big|\Psi^{[i]}_{i_1,\ldots,i_d}\big|\,(2 m d)^{i_1+\cdots+i_d}.$$
Then (4) admits a unique analytic solution on the open disk with center $0$ and radius
$$\rho = \frac{m}{4 M}.$$

Proof. Our assumptions imply that
$$\dot{\hat\varphi}^{[i]} = \frac{M}{1 - \dfrac{\hat\varphi^{[1]} + \cdots + \hat\varphi^{[d]}}{2 m d}}, \qquad \hat\varphi^{[1]}(0) = \cdots = \hat\varphi^{[d]}(0) = m$$
constitutes a "majorant equation" for (4) in the sense of Cauchy–Kovalevskaya. By symmetry, each of the components $\hat\varphi^{[i]}$ of this equation satisfies the simpler equation
$$\dot g = \frac{M}{1 - \dfrac{g}{2m}}, \qquad g(0) = m,$$
which admits the explicit solution
$$g(t) = 2m\left(1 - \sqrt{\frac14 - \frac{M t}{m}}\right)$$
of radius $\rho$. Since the formal power series solution of (4) satisfies $|\varphi^{[i]}_k| \le |\hat\varphi^{[i]}_k| = g_k$ for all $i \in \{1,\ldots,d\}$ and $k \in \mathbb{N}$, it follows that $\varphi$ has radius of convergence at least $\rho$. □

Theorem 2. Given a non-empty open subset $U$ of $\mathbb{C}$ and a bounded sequence $g_1, g_2, \ldots \in \mathcal{A}(U)^d$, we can extract a subsequence $g_{i_1}, g_{i_2}, \ldots$ that converges uniformly to a limit $g^* \in \mathcal{A}(U)^d$ on every compact subset of $U$.

Proof. If $d = 1$, then this is Montel's theorem. The general case is obtained using an easy induction on $d$: we first extract a subsequence $g_{j_1}, g_{j_2}, \ldots$ such that $g_{j_1}^{[1]}, g_{j_2}^{[1]}, \ldots$ converges to a limit in $\mathcal{A}(U)$, and then apply the induction hypothesis to this subsequence for the remaining components. □

2.3. Analyticity on open half disks

Let $J_F = (\partial F^{[i]} / \partial F_j)_{i,j}$ denote the Jacobian matrix of $F$.

Theorem 3. Let $R > 0$, $B \in (\mathbb{R}^{>})^d$, and $C < 1$ be such that for all $v \in \mathbb{C}^d$, we have
$$|v - c| \le B \implies |F(v)|\,R \le B \tag{5}$$
$$|v - c| \le B \implies \|J_F(v)\|_\infty\,R \le C. \tag{6}$$
Then, for any choice of $\Lambda$, the initial value problem
$$\dot\varphi + \Lambda\,\varphi = F(\varphi), \qquad \lim_{t\to 0} \varphi(t) = c \tag{7}$$
admits a unique analytic solution on $H_R^\circ$. In addition, we have $|\varphi(t) - c| \le B$ for all $t \in H_R^\circ$.

Proof. Consider the operator $\Phi\colon \mathcal{A}(H_R^\circ)^d \to \mathcal{A}(H_R^\circ)^d$ with
$$\Phi(g)(t) = c + \mathrm{e}^{-\Lambda t} \int_0^t \mathrm{e}^{\Lambda u}\,F(g(u))\,\mathrm{d}u.$$
The fixed points $\varphi \in \mathcal{A}(H_R^\circ)^d$ of this operator are precisely the solutions of (7). Let us prove the existence of such a fixed point. The uniqueness follows by analytic continuation from the well known uniqueness of the solution of the initial value problem (7) on a neighbourhood of the origin.

Let us first notice that $\Phi$ maps the ball
$$\mathfrak{B} = \big\{g \in \mathcal{A}(H_R^\circ)^d : \|g - c\|_{H_R^\circ} \le B\big\}$$
with center $c$ and radius $B$ into itself. Indeed, given $g \in \mathfrak{B}$ and $u \in H_R^\circ$, our hypothesis (5) implies
$$\big|\big(\mathrm{e}^{\Lambda u}\,F(g(u))\big)^{[i]}\big| \le \mathrm{e}^{\lambda_i \operatorname{Re} u}\,\frac{B^{[i]}}{R}.$$
For all $t \in H_R^\circ$, integrating along the straight path from $0$ to $t$ (on which $\operatorname{Re} u \le \operatorname{Re} t$), it follows that
$$\Big| \mathrm{e}^{-\Lambda t} \int_0^t \mathrm{e}^{\Lambda u}\,F(g(u))\,\mathrm{d}u \Big|^{[i]} \le \mathrm{e}^{-\lambda_i \operatorname{Re} t}\,\frac{B^{[i]}}{R} \int_0^t \mathrm{e}^{\lambda_i \operatorname{Re} u}\,|\mathrm{d}u| \le \mathrm{e}^{-\lambda_i \operatorname{Re} t}\,\frac{B^{[i]}}{R}\,|t|\,\mathrm{e}^{\lambda_i \operatorname{Re} t} \le B^{[i]},$$
whence $\|\Phi(g) - c\|_{H_R^\circ} \le B$, as desired.

Let us next show that $\Phi$ is actually contracting on $\mathfrak{B}$. Given $g, h \in \mathfrak{B}$, consider the homotopy $g_\lambda = (1-\lambda)\,g + \lambda\,h$ with $\lambda \in [0,1]$.
From (6), we get, for every $\lambda \in [0,1]$ and $u \in H_R^\circ$,
$$\Big| \frac{\partial F(g_\lambda(u))}{\partial \lambda} \Big|_\infty = \big| J_F(g_\lambda(u))\,(h(u) - g(u)) \big|_\infty \le \frac{C}{R}\,\|h - g\|_{H_R^\circ,\infty}.$$
Bounding the integral in the same way as above, it follows that
$$\|\Phi(h) - \Phi(g)\|_{H_R^\circ,\infty} = \Big\| \int_0^1 \frac{\partial \Phi(g_\lambda)}{\partial \lambda}\,\mathrm{d}\lambda \Big\|_{H_R^\circ,\infty} \le C\,\|h - g\|_{H_R^\circ,\infty}.$$
This shows that $\Phi$ is indeed contracting on $\mathfrak{B}$. Since $\mathfrak{B}$ is complete as a closed bounded subspace of $\mathcal{A}(H_R^\circ)^d$, we conclude that the iterates $g, \Phi(g), \Phi^2(g), \ldots$ converge to a fixed point $\varphi \in \mathfrak{B}$ of $\Phi$. □

2.4. A refinement for larger compact half disks

It turns out that the condition (6) on the Jacobian is not really needed. Moreover, the solution can be extended further, to the closure $H_R$ of $H_R^\circ$.

Theorem 4. Let $R > 0$ and $B \in (\mathbb{R}^{>})^d$ be such that for all $v \in \mathbb{C}^d$, we have
$$|v - c| \le B \implies |F(v)|\,R \le B. \tag{8}$$
Then, for any choice of $\Lambda$, the initial value problem
$$\dot\varphi + \Lambda\,\varphi = F(\varphi), \qquad \varphi(0) = c \tag{9}$$
admits a unique analytic solution on $H_R$. In addition, we have $|\varphi(t) - c| \le B$ for all $t \in H_R$.

Proof. Let $\Phi\colon \mathcal{A}(H_R^\circ)^d \to \mathcal{A}(H_R^\circ)^d$ be as in the proof of Theorem 3 and notice that $\Phi$ still maps $\mathfrak{B}$ into itself. Now consider the sequence $g_0, g_1, \ldots$ with $g_k = \Phi^k(c)$. Applying Montel's theorem, we obtain a subsequence $(g_{i_k})$ that converges uniformly to a limit $g^* \in \mathfrak{B}$ on every compact subset of $H_R^\circ$. We claim that $g^*$ is a fixed point of $\Phi$. Indeed, for a sufficiently small $R' > 0$ with $R' \le R$, we have $\|J_F(v)\|_\infty\,R' \le \frac12$ for all $v \in \mathbb{C}^d$ with $|v - c| \le B$. As in the proof of Theorem 3, this means that $\Phi$ is contracting on the corresponding ball over $H_{R'}^\circ$. Consequently, the restrictions $g_0', g_1', \ldots$ of the functions $g_0, g_1, \ldots$ to $H_{R'}^\circ$ tend to a fixed point $\varphi'$, and so does the subsequence $(g_{i_k}')$. Now this fixed point $\varphi'$ coincides with $g^*$ on $H_{R'}^\circ$ and also solves the initial value problem (9). By analytic continuation, $g^*$ therefore solves the same initial value problem on $H_R^\circ$, which completes the proof of our claim.

It remains to be shown that $g^*$ can be extended to an analytic function $\varphi$ on $H_R$ that satisfies $|\varphi(t) - c| \le B$ for all $t \in H_R$. Since $\dot g^* = F(g^*) - \Lambda g^*$ on $H_R^\circ$, we first notice that $|\dot g^*|$ is bounded on $H_R^\circ$, whence $g^*$ continuously extends to a function $\varphi$ that is defined on the whole of $H_R$, and we clearly have $|\varphi(t) - c| \le B$ for all $t \in H_R$. Now for each $\theta \in H_R$, consider the unique solution $\varphi[0\to\theta]$ of the initial value problem $\dot\psi + \Lambda\psi = F(\psi)$ with $\psi(0) = \varphi(\theta)$. Theorem 1 provides us with a lower bound $\rho[0\to\theta] > 0$ for the radius of convergence of $\varphi[0\to\theta]$, which depends continuously on $\theta$. By the compactness of $H_R$, it follows that there exists some $\epsilon > 0$ with $\rho[0\to\theta] \ge \epsilon$ for all $\theta \in H_R$. Now consider $t$ on the boundary $\partial H_R$ of $H_R$ and let $\theta \in H_R^\circ$ be such that $|\theta - t| < \epsilon/2$. Then $\varphi[0\to\theta]$ has radius of convergence at least $\epsilon$ and $\varphi[0\to\theta](z) = \varphi(\theta + z)$ on some neighbourhood of $0$. We conclude that $\varphi[0\to\theta](z - \theta)$ is an analytic extension of $\varphi$ to the open disk $\{z \in \mathbb{C} : |z - \theta| < \epsilon\}$ that contains $t$. □

3. Computing formal power series solutions

Before we present algorithms for the numeric integration of (1), let us first consider the question how to compute formal power series solutions. Since we will later be considering power series solutions at various times $t$, we will use the letter $f$ instead of $\varphi$ for such solutions. Notice that a vector $f \in \mathbb{R}[[z]]^d$ of formal power series can also be regarded as a formal power series $f_0 + f_1 z + f_2 z^2 + \cdots \in \mathbb{R}^d[[z]]$ with coefficients in $\mathbb{R}^d$. Let $c \in \mathbb{R}^d$ be the initial condition. Setting $\Psi(f) = F(f) - \Lambda f$, we are thus interested in the computation of truncated power series solutions to the equation
$$\frac{\partial f}{\partial z} = \Psi(f), \qquad f_0 = c. \tag{10}$$
Alternatively, this equation can be rewritten in integral form
$$f = c + \int \Psi(f), \tag{11}$$
where the integration operator $\int$ sends $g_0 + g_1 z + g_2 z^2 + \cdots$ to $g_0 z + \frac12 g_1 z^2 + \frac13 g_2 z^3 + \cdots$.
In this section, we recall several approaches to the computation of power series solutions to such equations. The efficiency of each method can be measured in terms of the number of field operations in $\mathbb{R}$ that are required to compute the first $n$ terms of the solution. For the time being, we assume that all computations are done at a fixed bit precision $p$, so that operations in $\mathbb{R}$ are done at unit cost. One may also take into account the number $s$ of scalar operations that are necessary for the evaluation of $\Psi$. A further refinement is to separately count the number $s_m$ of multiplications that are needed. For instance, if $d = 1$ and $\Psi(f) = (f + 3 f^2)\,f - 100\,f$, then we have $s = 6$ and $s_m = 2$. We always assume that $s \ge d$.

Iterative method. For the first method, we systematically compute with truncated power series of order $n$ in $\mathbb{R}[z]/(z^n)$, which are also called jets of order $n$. One addition in $\mathbb{R}[z]/(z^n)$ reduces to $n$ additions in $\mathbb{R}$, and similarly for scalar multiplications with an element in $\mathbb{R}$. A naive multiplication in $\mathbb{R}[z]/(z^n)$ requires $O(n^2)$ operations in $\mathbb{R}$, although this cost can be reduced to $O(n \log n)$ using FFT techniques. The integration operator $\int$ sends $(g_0 + \cdots + g_{n-1} z^{n-1}) \bmod z^n$ to $(g_0 z + \frac12 g_1 z^2 + \cdots + \frac{1}{n-1} g_{n-2} z^{n-1}) \bmod z^n$ and can be computed in linear time. Now assume that $f \in (\mathbb{R}[z]/(z^n))^d$ is an approximate solution to (11) whose first $k \le n$ terms are correct. In other words, if $f^\sharp$ denotes the actual solution of (10), then $f^\sharp - f = O(z^k)$. Given such an approximation, one iteration
$$f \leftarrow c + \int \Psi(f) \tag{12}$$
of (11) yields a new approximation whose first $k+1$ terms are correct. Starting with $f = c$, we thus obtain a solution modulo $z^n$ of (11) after at most $n$ iterations. The total cost of this computation is bounded by $O(s_m n^3 + s n^2)$, or by $O(s_m n^2 \log n + s n^2)$ when using FFT techniques.

Recurrence relations. Since $\Psi$ is a polynomial, it is built from the components of $f$ using additions, multiplications, and scalar multiplications with constants in $\mathbb{R}$. For sums, scalar products, general products, and integrals of power series, we may extract their coefficients using the following rules:
$$(g + h)_k = g_k + h_k, \qquad (c\,g)_k = c\,g_k, \qquad (g\,h)_k = g_0 h_k + g_1 h_{k-1} + \cdots + g_k h_0, \qquad \big(\textstyle\int g\big)_k = \tfrac1k\,g_{k-1}. \tag{13}$$
Applying this recursively to the polynomial expression $\Psi(f)$, the iteration (12) yields a recurrence relation
$$f_{k+1} = \frac{\Psi(f)_k}{k+1} \tag{14}$$
that allows us to compute $f_{k+1}$ from $f_0, \ldots, f_k$, by induction on $k$. Proceeding in this way, the computation of the solution $f_0 + \cdots + f_{n-1} z^{n-1}$ at order $n$ requires $O(s_m n^2 + s n)$ operations.

The lazy power series approach. The above approach of computing the coefficients $f_0, f_1, \ldots$ successively using the recurrence relation (14) can be reformulated elegantly in the framework of lazy power series computations. The idea is to regard a power series $g \in \mathbb{R}[[z]]$ as given by its stream $g_0, g_1, g_2, \ldots$ of coefficients. Basic arithmetic operations on power series are implemented in a lazy manner: when computing the $n$-th coefficient of a sum $g + h$, a product $g\,h$, or an integral $\int g$, we compute "just the coefficients that are needed" from $g$ and $h$. The natural way to do this is precisely to use the relations (13). The lazy power series approach has the important property that the $n$-th coefficient $(g h)_n$ of a product $g h$ (say) becomes available as soon as $g_0, \ldots, g_n$ and $h_0, \ldots, h_n$ are given. This makes it possible to let $g$ and $h$ depend on the result $\pi = g h$, as long as the computation of $g_n$ and $h_n$ only involves the previous coefficients $\pi_0, \ldots, \pi_{n-1}$ of $\pi$. As a consequence, the fixed point equation (11) can be solved simply by evaluating its right hand side using lazy power series arithmetic. Indeed, the $k$-th coefficient of $c + \int \Psi(f)$ only depends on the previous coefficients $f_0, \ldots, f_{k-1}$ of $f$. The main drawback of the lazy power series approach is that the computation of a product at order $n$ using (13) requires $O(n^2)$ operations.
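Before turning to faster multiplication techniques, here is a minimal Python sketch of the recurrence relation (14). The componentwise forcing term $F(v) = v^2$ and the use of machine floating point numbers are illustrative assumptions of this sketch, not choices imposed by the method.

```python
import numpy as np

def taylor_coefficients(c, lam, n):
    # First n Taylor coefficients f_0, ..., f_{n-1} of the solution of
    #   f' + lam*f = f^2  (componentwise),  f(0) = c,
    # computed via the recurrence f_{k+1} = [f^2 - lam*f]_k / (k+1).
    c, lam = np.asarray(c, float), np.asarray(lam, float)
    f = np.zeros((n, len(c)))
    f[0] = c
    for k in range(n - 1):
        sq_k = sum(f[j] * f[k - j] for j in range(k + 1))   # (f*f)_k, Cauchy product
        f[k + 1] = (sq_k - lam * f[k]) / (k + 1)
    return f

# example with d = 2, lambda_1 = 1, lambda_2 = 100
f = taylor_coefficients([0.2, 0.1], [1.0, 100.0], 10)
```

Each of the $n$ steps costs $O(k)$ operations for the Cauchy product, in accordance with the announced $O(s_m n^2 + s n)$ bound.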
The relaxed power series approach. Here we recall that FFT techniques allow us to compute a product in $\mathbb{R}[z]/(z^n)$ using only $O(n \log n)$ operations. One essential observation is that, in order to solve (11) using the lazy power series approach, we only relied on the fact that each coefficient $(g h)_n$ becomes available as soon as $g_0, \ldots, g_n$ and $h_0, \ldots, h_n$ are given. In fact, it is possible to design faster multiplication methods that still have this property; such methods are said to be relaxed or on-line. A relaxed multiplication method that computes a product at order $n$ in time $O(n \log^2 n)$ was presented in [van der Hoeven 2002] and can be traced back to [Fischer–Stockmeyer 1974]. An even faster algorithm of time complexity $n \log n\,2^{O(\sqrt{\log \log n})}$ was given in [van der Hoeven 2007]. Denoting by $\mathsf{R}(n)$ the cost of relaxed multiplication at order $n$, the resolution of (11) at order $n$ now requires only $O(s_m\,\mathsf{R}(n) + s\,n)$ operations.

Newton's method. Yet another idea to speed up computations is to replace the iteration (12) with a Newton-style iteration with faster convergence. This idea was first used by Brent and Kung [1978] to show that (10) can be solved at order $n$ in time $O(\mathsf{M}(n) \log n)$, where $\mathsf{M}(n)$ denotes the cost of multiplication at order $n$. However, this complexity analysis does not take into account the dependence on $d$ and $s$. In particular, the dependence on $d$ of Brent and Kung's method is exponential. Faster algorithms were proposed in [Sedoglavic 2001; Bostan et al. 2007], based on the simultaneous computation of the solution and its first variation. This allows for the computation of a solution of (10) at order $n$ in time $O((s_m + d)\,d\,n \log n + s\,d\,n)$. Whenever $n$ is large with respect to $d$, the computation time can be further reduced to $O((s_m + d)\,n \log n + s\,n)$ [van der Hoeven 2010]. For small $d$, this leads to the asymptotically most efficient method for solving (10). For large $d$, the computation of the first variation induces an overhead of a factor $d$, and the relaxed method usually becomes more efficient.

4. Numerical integration using Taylor expansions

4.1. Naive integration using Taylor series

Let $\Psi(v) = F(v) - \Lambda v$ as before. The aim of this section is to present a naive algorithm for the numerical integration of the differential equation
$$\dot\varphi = \Psi(\varphi), \tag{15}$$
based on the computation of truncated power series solutions at successive times $t_0 = 0 < t_1 < t_2 < \cdots$. We will use a fixed expansion order $n$ and a fixed bit precision $p \ge 24$, but an adaptive step size $\delta_0 = t_1 - t_0, \delta_1 = t_2 - t_1, \ldots$. At $t_0 = 0$, we are given an initial condition $\varphi(0) = c \in \mathbb{R}^d$, and our aim is to find a good numeric approximation for $\varphi(T)$ at a given time $T > 0$. The algorithm is not designed to be efficient when the equation gets stiff, and we will present a heuristic discussion of what goes wrong when this happens.

Let us write $f^{\mathrm{exact}}[t] = \varphi(t + z) \in \mathbb{R}^d[[z]]$ for the Taylor series expansion of $\varphi$ at time $t$ and $f[t]$ for its truncation at order $n$. In other words, setting
$$\mathcal{P} = \{g \in \mathbb{R}^d[z] : \deg g < n\},$$
we have $f[t] \in \mathcal{P}$ and
$$\varphi(t + z) = f[t] + O(z^n) = f[t]_0 + f[t]_1 z + \cdots + f[t]_{n-1} z^{n-1} + O(z^n).$$
If the time $t$ at which we expand $\varphi$ is clear from the context, then we will simply drop the postfix $[t]$ and write $f$ instead of $f[t]$. Conversely, for any other quantities that implicitly depend on $t$, we will use the postfix $[t]$ to make this dependence explicit.

So let $f = f[t]$ be the power series expansion of $\varphi$ at a fixed time $t$. In view of (15), this power series satisfies the equation
$$\frac{\partial f}{\partial z} = \Psi(f). \tag{16}$$
In the previous section, we recalled various methods for computing truncated power series solutions $f$ as a function of the initial condition $f_0 = \tilde\varphi[t] = \tilde\varphi$ at time $t$. In what follows, we will use any of these algorithms as a black box, and show how to derive a numerical integration scheme from it. Obviously, given $\tilde\varphi$ and an appropriate step size $\delta = \delta[t]$ at time $t$, the idea is to compute $f = f[t]$ and simply evaluate
$$\tilde\varphi(t + \delta) := f(\delta) = f_0 + f_1 \delta + \cdots + f_{n-1} \delta^{n-1}.$$
We next continue with $t + \delta$ in the role of $t$ and with a suitably adapted step size $\delta[t+\delta]$. The main question is therefore to find a suitable step size $\delta$.
Now the expected order of magnitude of $f[t](\delta)$ is given by
$$M = \max_{0 \le k < n} |f_k|_\infty\,\delta^k. \tag{17}$$
Since we are computing with $p$ bits of precision, we wish to keep the relative error of our numeric integration scheme below $2^{-p}$, approximately. Therefore, we need to ensure that the truncation error
$$E = \Big| \sum_{k \ge n} f_k\,\delta^k \Big|_\infty \tag{18}$$
remains bounded by $M\,2^{-p}$. Although we have not computed any of the coefficients $f_k$ with $k \ge n$, a reasonable approximate upper bound for $E$ is given by
$$\tilde E = \max_{n - \epsilon \le k < n} |f_k|_\infty\,\delta^k, \tag{19}$$
where $\epsilon > 0$ is a small positive integer. In order to protect ourselves against the occasional vanishing of individual coefficients $f_k$, it is wise to take $\epsilon > 1$. Nevertheless, a small value such as $\epsilon = 2$ or $\epsilon = 3$ should usually provide acceptable upper estimates for $E$. We now simply take the step size $\delta$ to be maximal with $\tilde E \le M\,2^{-p}$. This leads to the following algorithm for the numerical integration of (15):

Algorithm 1
Input: an initial condition $c \in \mathbb{R}^d$ and $T > 0$
Output: a numerical approximation for $\varphi^{\mathrm{exact}}(T)$, where $\varphi^{\mathrm{exact}}$ satisfies (15) with $\varphi^{\mathrm{exact}}(0) = c$

  $\tilde\varphi := c$, $t := 0$
  while $t < T$:
    compute the truncated solution $f$ to (16) with $f_0 = \tilde\varphi$
    let $\delta$ be maximal such that $\tilde E \le M\,2^{-p}$, with $M$ and $\tilde E$ as in (17) and (19)
    $\delta := \min(\delta, T - t)$
    $\tilde\varphi := f(\delta)$, $t := t + \delta$
  return $\tilde\varphi$
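For concreteness, here is a small Python sketch of Algorithm 1 for the same toy equation $\dot\varphi + \Lambda\varphi = \varphi^2$ as in the earlier sketch (again an illustrative assumption); `taylor_coefficients` is the routine from section 3, and the maximal step size is located by a crude halving search rather than an exact solve.

```python
import numpy as np

def integrate_naive(c, lam, T, n=20, p=53, eps=3):
    # Algorithm 1 (sketch): naive Taylor integration with adaptive step size.
    phi, t = np.asarray(c, float), 0.0
    while t < T:
        f = taylor_coefficients(phi, lam, n)
        mag = np.max(np.abs(f), axis=1)                 # |f_k|_oo for k < n
        delta = T - t                                   # try the largest useful step
        while True:
            pw = mag * delta ** np.arange(n)
            M, E = pw.max(), pw[n - eps:].max()         # M as in (17), E~ as in (19)
            if E <= M * 2.0 ** (-p) or delta < 1e-300:
                break
            delta /= 2                                  # shrink until E~ <= M 2^{-p}
        delta = min(delta, T - t)
        phi = sum(f[k] * delta ** k for k in range(n))  # evaluate f(delta)
        t += delta
    return phi
```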
4.2. Step size and error analysis

Let $\varphi^{\mathrm{exact}}$ denote the exact solution of (15) with $\varphi^{\mathrm{exact}}(0) = c$ and let $\tilde\varphi$ denote the approximation computed by Algorithm 1. In order to analyze various aspects of the algorithm, it is instructive to look at the radius of convergence $\rho_t$ of $\varphi^{\mathrm{exact}}$ at time $t$. Roughly speaking, the coefficients $f^{\mathrm{exact}}_k = f^{\mathrm{exact}}[t]_k$ of the power series expansion of $\varphi^{\mathrm{exact}}$ at time $t$ grow as
$$|f^{\mathrm{exact}}_k|_\infty \approx M\,\rho_t^{-k}.$$
Ideally speaking, if we were able to compute $f$ with sufficient accuracy, then the coefficients $f_k$ should grow in a similar way. Writing $\delta^{\mathrm{ideal}}$ for the step size in this ideal situation, we would then expect that $\tilde E \approx E \approx M\,(\delta^{\mathrm{ideal}}/\rho_t)^n \approx M\,2^{-p}$, whence
$$\delta^{\mathrm{ideal}} \approx \rho_t\,2^{-p/n}. \tag{20}$$
This suggests to take the expansion order $n$ proportional to $p$, after which the step size $\delta^{\mathrm{ideal}}$ should be proportional to the radius of convergence $\rho_t$.

Let us pursue this line of wishful thinking a little further. Theorem 4 implies that $\varphi^{\mathrm{exact}}$ is analytic on a compact half disk $H_R$ that is independent of $\Lambda$. In particular, we get that $\rho_t \ge t$ for $t \le R/2$. It can also be shown that the radius of convergence of $\varphi^{\mathrm{exact}}$ at the origin is of the order $1/\lambda_d$ or more. For stiff equations, we typically have $1/\lambda_d \ll R$. In order to integrate the equation until a time $T \approx R/2$, we thus hope for a step size that increases geometrically from roughly $1/\lambda_d$ to roughly $R$. The entire integration would then require approximately $2^{p/n} \log(\lambda_d T)$ steps. Using the most efficient algorithms from section 3, each step requires $\tilde O(n)$ floating point operations (here the "flat Oh" notation $\tilde O(T)$ stands for $T\,(\log T)^{O(1)}$). Since additions and multiplications of $p$-bit floating point numbers can be performed in time $\tilde O(p)$, the overall bit cost of the entire integration would thus be bounded by $\tilde O(2^{p/n}\,n\,p\,\log(\lambda_d T))$.

But are we indeed able to compute $f$ with enough accuracy in order to ensure that the coefficients $f_k$ grow according to $|f_k|_\infty \approx M\,\rho_t^{-k}$? Let us carefully analyze each of the sources of error for one step of our integration scheme. By construction, we ensured the truncation error to be of the order $E \approx M\,2^{-p}$. One of the most intrinsic sources of error comes from the initial condition $\tilde\varphi$: since we are computing with a precision of $p$ bits, the mere representation of $\tilde\varphi$ induces a relative error of the order of $2^{-p}$. Even when computing $f$ from $f_0 = \tilde\varphi$ with infinite precision, this intrinsic source of error cannot be removed.

Let us now study how the error in the initial condition affects the errors in the other coefficients $f_k$. For this, we need to investigate the first variation of (16), which describes the sensitivity of the flow to the initial condition. More precisely, let $f[t; \check\varphi]$ be the power series solution of (16) at time $t$ as a function of the initial condition $f[t;\check\varphi]_0 = \check\varphi \in \mathbb{R}^d$. Then the first variation $V = V[t;\check\varphi]$ is the $d \times d$ matrix with entries $V^{[i,j]} = \partial f[t;\check\varphi]^{[i]} / \partial \check\varphi^{[j]}$. Dropping the $[t;\check\varphi]$ postfixes when they are clear from the context, the first variation satisfies the linear differential equation
$$\frac{\partial V}{\partial z} = \big(J_F(f) - \Lambda\big)\,V, \qquad V_0 = \mathrm{Id}. \tag{21}$$
Here we recall that $J_F$ stands for the Jacobian matrix of $F$. If our equation is very stiff, then $J_F(f)$ is small with respect to $\Lambda$, which leads to the extremely crude approximation $V \approx \mathrm{e}^{-\Lambda z}$ for the first variation. Now the relative error in the initial condition $\tilde\varphi = f_0$ is at best $2^{-p}$, as explained above, which means that $|f_0 - f_0^{\mathrm{exact}}|_\infty \approx M\,2^{-p}$. In combination with the relation $f - f^{\mathrm{exact}} \approx V\,(f_0 - f_0^{\mathrm{exact}}) \approx \mathrm{e}^{-\Lambda z}\,(f_0 - f_0^{\mathrm{exact}})$, this leads to the following errors for the other coefficients:
$$|f_k - f_k^{\mathrm{exact}}|_\infty \approx \frac{\lambda_d^k}{k!}\,M\,2^{-p}. \tag{22}$$
Now if $\lambda_d \gg n/\rho_t$, then the error $|f_{n-1} - f_{n-1}^{\mathrm{exact}}|_\infty \approx M\,\lambda_d^{n-1}\,2^{-p}/(n-1)!$ dominates the actual value $|f_{n-1}^{\mathrm{exact}}|_\infty \approx M\,\rho_t^{-(n-1)}$, which yields $\tilde E \approx M\,(\lambda_d \delta)^{n-1}\,2^{-p}/(n-1)!$ instead of $\tilde E \approx M\,(\delta/\rho_t)^{n-1}$. When choosing our step size $\delta$ such that $\tilde E \le M\,2^{-p}$, as in Algorithm 1, this yields $(\lambda_d \delta)^n \lesssim n!$ and
$$\delta \lesssim \frac{(n!)^{1/n}}{\lambda_d} \approx \frac{n}{\mathrm{e}\,\lambda_d}, \tag{23}$$
instead of the desired step size $\delta^{\mathrm{ideal}} \approx \rho_t\,2^{-p/n}$. The actual bit cost of the complete integration is therefore bounded by $\tilde O(\lambda_d\,T\,p)$ instead of $\tilde O(2^{p/n}\,n\,p\,\log(\lambda_d T))$.

An interesting aspect of this analysis is the fact that the step size $\delta$ still turns out to be roughly $n/\mathrm{e}$ times larger than $1/\lambda_d$, whence larger orders allow for larger step sizes. However, this comes at the expense of a larger relative error than $2^{-p}$. Indeed, we have
$$|f_k\,\delta^k - f_k^{\mathrm{exact}}\,\delta^k|_\infty \lesssim \frac{(\lambda_d \delta)^k}{k!}\,M\,2^{-p}. \tag{24}$$
This error is maximal for $k_0 \approx \lambda_d \delta \approx n/\mathrm{e}$, in which case we have
$$|f_{k_0}\,\delta^{k_0} - f_{k_0}^{\mathrm{exact}}\,\delta^{k_0}|_\infty \lesssim \mathrm{e}^{\lambda_d \delta}\,M\,2^{-p} \approx \mathrm{e}^{n/\mathrm{e}}\,M\,2^{-p}.$$
This means that the relative error for one step of our integration method is $2^{n/(\mathrm{e} \log 2) - p}$ instead of $2^{-p}$. In other words, we have "sacrificed" $n/(\mathrm{e} \log 2)$ bits of precision, so that the method admits an effective precision of only $p - n/(\mathrm{e} \log 2)$ bits.

The last source of errors for Algorithm 1 comes from rounding errors during the computation of $f$ from $f_0$. The nature of these errors depends on the particular method that we use for computing the power series $f$. Nevertheless, for most methods, the rounding errors only contribute marginally to the total error. This is due to the fact that $|f_k\,\delta^k|_\infty\,2^{-p} \lesssim |f_k\,\delta^k - f_k^{\mathrm{exact}}\,\delta^k|_\infty$ for $k > 0$, so the rounding errors are absorbed by the errors induced by the error in the initial condition $\tilde\varphi$.
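The following toy computation illustrates the error growth (22). We take the linear equation $\dot\varphi + \lambda\,\varphi = 1$ as a deliberately simple stand-in for (1) (an assumption for illustration): at the equilibrium $\varphi^* = 1/\lambda$, all higher Taylor coefficients vanish, yet a one-ulp perturbation of the initial value produces coefficients of size roughly $\lambda^{k-1}\,2^{-53}/k!$.

```python
import numpy as np

lam, n = 500.0, 30
for err in (0.0, 2.0 ** -53):            # exact vs one-ulp-perturbed start
    f = np.zeros(n)
    f[0] = (1.0 / lam) * (1.0 + err)     # equilibrium value 1/lam, perturbed
    for k in range(n - 1):               # recurrence for f' + lam*f = 1
        f[k + 1] = ((1.0 if k == 0 else 0.0) - lam * f[k]) / (k + 1)
    print(f[1], f[10], f[25])
```

With `err = 0` every higher coefficient vanishes, while the perturbed run yields $|f_{25}|$ of the order of $10^{23}$: exactly the $\lambda^k/k!$ amplification that forces the reduced step size (23).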
5. Fighting stiffness

5.1. Integrating stiff equations using Taylor series

Let us continue with the notations from section 4. In particular, we assume that the exact solution $\varphi^{\mathrm{exact}}$ of (15) with initial condition $\varphi^{\mathrm{exact}}(0) = c$ is analytic on the compact half disk $H_R$. We also denote by $K$ an upper bound for $|\varphi^{\mathrm{exact}}(t)|_\infty$ on $H_R$, so that $\rho_t \ge t$ and
$$|f^{\mathrm{exact}}[t]_k|_\infty \le \frac{K}{t^k}$$
for all $t \le R/2$ and $k \in \mathbb{N}$, by Cauchy's theorem. From now on, we assume that $\lambda_d\,t \gg n$, so that Algorithm 1 only allows us to use a step size of the order (23) instead of (20). The aim of this section is to present a new way to compute truncated power series solutions $\tilde f$ of the equation
$$\frac{\partial f}{\partial z} + \Lambda\,f = F(f) \tag{25}$$
at time $t$, with the property that $\tilde f$ is a good approximation of the true truncated solution $f^{\mathrm{exact}}[t]_{;n}$. In a similar way as in section 4.1, we will then use this to derive an algorithm for the numerical integration of (1). Contrary to before, the property that $|\tilde f_k|_\infty \lesssim |f^{\mathrm{exact}}[t]_k|_\infty \le K/t^k$ for $k < n$ allows us to take step sizes of the desired order (20) for $t \le R/2$.

We stress once more that the reduced step size of Algorithm 1 is a consequence of our choice to compute the truncated solution $f$ of (25) in terms of the initial condition $f_0 = \tilde\varphi(t)$ (which can only be known approximately), together with the choice of the particular algorithm that is used for this computation. Indeed, as explained in section 4.2, a small change in $f_0$ of the order of $|\Delta f_0|_\infty \approx |f_0|_\infty\,2^{-p} \approx M\,2^{-p}$ can have a dramatic effect on the solution, since the coefficient $f_k$ can change by as much as $|\Delta f_k|_\infty \approx M\,2^{-p}\,\lambda_d^k / k!$. Since $|f^{\mathrm{exact}}_k|_\infty \le K/t^k \ll M\,2^{-p}\,\lambda_d^k/k!$ for large $k \le n$, it follows that a minor change in $f_0$ leads to a completely incorrect computation of the coefficients $f_k$ with $k$ close to $n$. For small $k$, the error $|\Delta f_k|_\infty$ generally does remain bounded by the actual value $|f^{\mathrm{exact}}_k|_\infty$. In the theory of stiff differential equations, it is customary to say that the system is in a transient state for such $k$. As soon as the error $|\Delta f_k|_\infty \approx M\,2^{-p}\,\lambda_d^k/k!$ exceeds $|f^{\mathrm{exact}}_k|_\infty$, we say that the system has reached its steady state for the largest eigenvalue $\lambda_d$ of $\Lambda$. When this happens, the system may still be in a transient state for some of the other eigenvalues $\lambda_i < \lambda_d$. This motivates the definition of the critical index $\nu \in \{0,\ldots,d\}$ as the largest index $i$ such that we are in a transient state for $\lambda_i$; we take $\nu = 0$ if we have reached the steady state for all eigenvalues $\lambda_i$.

The concepts of transient state, steady state, and critical index are deliberately somewhat vague. As a tentative definition, we might say that we have reached the steady state for $\lambda_i$ if $|f^{\mathrm{exact}[i]}_n| \le M\,2^{-p}\,\lambda_i^n/n!$. However, for computational purposes, it is convenient to interpret this inequality as an approximate one. The crucial property of reaching the steady state for $\lambda_i$ is that the smallness of $|f^{\mathrm{exact}[i]}_n|$ essentially determines the $i$-th component $f_0^{[i]}$ of the initial condition up to its last bit. The main idea behind our algorithm is to use this property as a "boundary condition" for the computation of $\tilde f$. More precisely, we boldly impose the boundary conditions $f_n^{[\nu+1]} = \cdots = f_n^{[d]} = 0$ as a replacement for the initial conditions $f_0^{[\nu+1]} = \tilde\varphi^{[\nu+1]}, \ldots, f_0^{[d]} = \tilde\varphi^{[d]}$, while keeping the remaining initial conditions $f_0^{[1]} = \tilde\varphi^{[1]}, \ldots, f_0^{[\nu]} = \tilde\varphi^{[\nu]}$ for the transient states. In other words, we wish to find the truncated solution $f$ of the system
$$\frac{\partial f}{\partial z} + \Lambda\,f = F(f), \qquad \begin{cases} f_0^{[i]} = \tilde\varphi^{[i]} & i \le \nu \\ f_n^{[i]} = 0 & i > \nu. \end{cases} \tag{26}$$
We will call $f_n^{[i]} = 0$ the steady-state condition for $\lambda_i$ and (26) a steady-state problem.

In order to solve (26), it is natural to adapt the iterative method from section 3 and introduce the operator $\Phi\colon \mathcal{P} \to \mathcal{P}$ with
$$\Phi(g)^{[i]} = \begin{cases} \tilde\varphi^{[i]} + \int \big(F^{[i]}(g) - \lambda_i\,g^{[i]}\big) & i \le \nu \\[1mm] \lambda_i^{-1} \big(F^{[i]}(g) - \partial g^{[i]}\big) & i > \nu \end{cases} \quad \bmod z^n. \tag{27}$$
The actual arithmetic is performed over $\mathbb{R}[z]/(z^n)$, but it will be more convenient to view $\Phi$ as an operator from $\mathcal{P}$ into itself; for $i > \nu$, the derivative $\partial g^{[i]}$ is taken with the convention $g_n^{[i]} = 0$, in accordance with the steady-state condition.

The computation of $\tilde f$ as a fixed point of $\Phi$ leaves us with two questions: how to choose a suitable ansatz for the iteration $g \leftarrow \Phi(g)$, and how to determine the critical index $\nu$. For the ansatz, we go back to the solution $f[t_-]$ from the previous step at time $t_- = t - \delta_-$ and simply use $f[t_-](\delta_- + z)$ as a first approximation of the solution $f[t]$ at time $t$. For $t = 0$, we fall back on the traditional methods from section 3. For the initial and steady-state conditions to "propagate to the other end", at least $n$ iterations are required in order to find a fixed point of $\Phi$, whereas a small multiple of $n$ iterations usually suffices. One may thus take $f[t] \approx \Phi^{2n}(f[t_-](\delta_- + z))$, say. As to the critical index $\nu$, we determine it as a function of the step size $\delta$ through
$$\nu = \max\,\{i \in \{0,\ldots,d\} : \lambda_i\,\delta \le n\}. \tag{28}$$
This choice is meant to ensure the fastest convergence of the iteration $g \leftarrow \Phi(g)$; here $n/\delta$ serves as an approximate threshold between the transient and steady-state regimes.
The step size $\delta$ itself is chosen in the same way as in section 4.1. Since $f$ is not yet available for the computation of $\delta$, we may use the previous step size $\delta_-$ as an approximation for $\delta$ in (28). Optionally, one may also wish to implement a few additional sanity checks in order to ensure the convergence of the $\Phi$-iteration, and decrease the step size in case of failure. One useful such check is to verify that $|\Phi(f[t]) - f[t]|_\infty \le M\,2^{-p}$. Altogether, this leads to the following algorithm:

Algorithm 2
Input: an initial condition $c \in \mathbb{R}^d$ and $T > 0$
Output: a numerical approximation for $\varphi^{\mathrm{exact}}(T)$, where $\varphi^{\mathrm{exact}}$ satisfies (15) with $\varphi^{\mathrm{exact}}(0) = c$

  $\tilde\varphi := c$, $t := 0$
  compute the truncated solution $f$ to (16) with $f_0 = \tilde\varphi$
  while $t < T$:
    let $\delta$ be maximal such that $\tilde E \le M\,2^{-p}$, with $M$ and $\tilde E$ as in (17) and (19)
    $\delta := \min(\delta, T - t)$
    $\tilde\varphi := f(\delta)$, $t := t + \delta$
    $\nu := \max\{i : \lambda_i\,\delta \le n\}$
    replace $f$ by the approximate fixed point $\Phi^{2n}(f(\delta + z))$ of the operator $\Phi$ from (27)
  return $\tilde\varphi$

5.2. Convergence of the $\Phi$-iteration

Let us now study the convergence of $\Phi$ as a mapping on the $n d$-dimensional vector space $\mathcal{P}$ over $\mathbb{R}$. We define a norm $\|\cdot\|_\beta$ on this space by
$$\|g\|_\beta = \max_{0 \le k < n} \beta_k\,|g_k|_\infty = \max_{1 \le i \le d,\ 0 \le k < n} \beta_k\,\big|g_k^{[i]}\big|, \qquad \beta_k = (k+1)\,\delta^k.$$
For a linear map on this space, represented as a matrix $L$, we have the corresponding matrix norm
$$\|L\|_\beta = \max_{\|g\|_\beta = 1} \|L\,g\|_\beta.$$
Recall that $J_F$ and $J_\Phi$ stand for the Jacobian matrices of $F$ and $\Phi$.

Theorem 5. Assume that $f$ is a fixed point of $\Phi$ and that $A > 0$ and $\delta \le n/(\mathrm{e}\,\lambda_\nu)$ are constants such that $\|J_F(f)_k\|_\infty \le A\,\delta^{-k}$ for all $k \in \{0,\ldots,n-1\}$. Then
$$\|J_\Phi(f)\|_\beta \le \frac{A}{\lambda_{\nu+1}} + \frac{A\,\delta}{n}\,\mathrm{e}^{\lambda_\nu \delta}.$$

Proof (sketch). Consider an infinitesimal perturbation $\delta g$ of $f$, with induced perturbation $\delta\Phi = J_\Phi(f)\,\delta g$. For $i \le \nu$ and $0 < k < n$, the rules (13) yield
$$(\delta\Phi)_k^{[i]} = \frac1k \Big( \big(J_F(f)\,\delta g\big)_{k-1}^{[i]} - \lambda_i\,(\delta g)_{k-1}^{[i]} \Big),$$
and for $i > \nu$ and $0 \le k < n$,
$$(\delta\Phi)_k^{[i]} = \frac{1}{\lambda_i} \Big( \big(J_F(f)\,\delta g\big)_k^{[i]} - (k+1)\,(\delta g)_{k+1}^{[i]} \Big),$$
with the convention $(\delta g)_n = 0$. Estimating the Cauchy products $(J_F(f)\,\delta g)_k$ coefficient by coefficient using $\|J_F(f)_k\|_\infty \le A\,\delta^{-k}$ and $|(\delta g)_k|_\infty \le \beta_k^{-1}\,\|\delta g\|_\beta$, a routine computation bounds the contribution of the transient components $i \le \nu$ by $(A\delta/n)\,\mathrm{e}^{\lambda_\nu \delta}\,\|\delta g\|_\beta$, where the hypothesis $\delta \le n/(\mathrm{e}\lambda_\nu)$ is used to control the terms involving $\lambda_i \le \lambda_\nu$, and the contribution of the steady components $i > \nu$ by $(A/\lambda_{\nu+1})\,\|\delta g\|_\beta$, where we use $\lambda_{\nu+1}\,\delta > n$. Putting both estimates together yields the announced bound. □

Assume that
$$C = \frac{A}{\lambda_{\nu+1}} + \frac{A\,\delta}{n}\,\mathrm{e}^{\lambda_\nu \delta} < 1. \tag{29}$$
Then the theorem implies the existence of a small neighbourhood of the fixed point $f$ of $\Phi$ on which the $\Phi$-iteration converges to $f$. Whenever this condition is met, the $\Phi$-iteration actually tends to converge on a rather large neighbourhood that includes our ansatz; see the next subsection for a more detailed discussion. Intuitively speaking, the condition requires the eigenvalues $\lambda_i$ to be sufficiently separated with respect to the norm of the forcing term $F$. Even when the condition does not hold, the $\Phi$-iteration usually still displays an initial convergence for our ansatz, but the quality of the approximate solution to (26) ceases to improve after a while.

If the condition (29) is satisfied for all critical indices $\nu$ that we encounter when integrating from $t = 0$ until time $T$, then Algorithm 2 should produce an accurate result. The idealized analysis from section 4.2 then also applies, so the algorithm takes $O(2^{p/n} \log(\lambda_d T))$ steps. Since each step now requires $\tilde O(n^2)$ floating point operations at bit precision $p$, we finally obtain the bound $\tilde O(n^2\,p\,2^{p/n} \log(\lambda_d T))$ for the total running time.
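As an illustration of (27), the sketch below performs the $\Phi$-iteration for the same hypothetical componentwise forcing term $F(v) = v^2$ used in the earlier sketches; transient components are integrated from their initial conditions, while steady components are solved backwards under the condition $f_n^{[i]} = 0$.

```python
import numpy as np

def phi_iterate(g, phi0, lam, nu, sweeps):
    # Phi-iteration (27) for the steady-state problem (26), with F(v) = v^2.
    # g has shape (n, d); nu counts the transient components (0-based: i < nu).
    n, d = g.shape
    for _ in range(sweeps):
        Fg = np.array([sum(g[j] * g[k - j] for j in range(k + 1))
                       for k in range(n)])            # coefficients of F(g)
        new = np.empty_like(g)
        for i in range(d):
            if i < nu:   # transient state: integrate F(g) - lam_i g from phi0
                rhs = Fg[:, i] - lam[i] * g[:, i]
                new[0, i] = phi0[i]
                new[1:, i] = rhs[:-1] / np.arange(1, n)
            else:        # steady state: solve backwards, with f_n^{[i]} = 0
                dg = np.arange(1, n) * g[1:, i]       # coefficients of g'
                new[:, i] = (Fg[:, i] - np.append(dg, 0.0)) / lam[i]
        g = new
    return g
```

In practice one runs about $2n$ sweeps from the ansatz $f[t_-](\delta_- + z)$, in line with the discussion above.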
5.3. Quality of the computed solution

Let us now investigate in more detail why fixed points $\tilde f$ of the $\Phi$-iteration indeed approximate the true solution $f^{\mathrm{exact}}[t]_{;n}$ quite well. For this, we will determine a more precise neighbourhood of the fixed point on which the $\Phi$-iteration converges, and show that this neighbourhood in particular contains $f^{\mathrm{exact}}[t]_{;n}$. We start with two lemmas.

Lemma 1. Let $M(z) = \sum_{k \in \mathbb{N}} M_k z^k$ be a $d \times d$ matrix of analytic functions defined on the closed ball $\bar B(0, \rho)$ with center zero and radius $\rho > 0$. Let $A = \max_{|z| = \rho} \|M(z)\|_\infty$. Then $\|M_k\|_\infty \le A/\rho^k$ for all $k \in \mathbb{N}$.

Proof. Given $v \in \mathbb{C}^d$ with $|v|_\infty = 1$, we have
$$|M_k\,v|_\infty = \Big| \frac{1}{2 \pi \mathrm{i}} \oint_{|z| = \rho} \frac{M(z)\,v}{z^{k+1}}\,\mathrm{d}z \Big|_\infty \le \rho^{-k} \sup_{|z| = \rho} |M(z)\,v|_\infty \le \frac{A}{\rho^k}.$$
It follows that $\|M_k\|_\infty = \sup_{|v|_\infty = 1} |M_k v|_\infty \le A/\rho^k$. □

Lemma 2. Let $\delta, \epsilon > 0$, $C < 1$, $g \in \mathcal{P}$, and
$$A = \sup\,\Big\{ \|J_F(g(z) + v)\|_\infty : |z| \le \delta,\ v \in \mathbb{C}^d,\ |v|_\infty \le \frac{n\,\epsilon}{1 - C} \Big\}.$$
Then for all $h \in \mathcal{P}$ with $\|h - g\|_\beta \le \epsilon/(1 - C)$ and all $k < n$, we have $\|J_F(h)_k\|_\infty \le A\,\delta^{-k}$.

Proof. Setting $\delta g = h - g$, we first notice that, for all $|z| \le \delta$,
$$|\delta g(z)|_\infty \le \sum_{k < n} |\delta g_k|_\infty\,\delta^k \le \sum_{k < n} \frac{\|\delta g\|_\beta}{k+1} \le \frac{n\,\epsilon}{1 - C}.$$
It follows that $\sup_{|z| \le \delta} \|J_F(h(z))\|_\infty \le A$, and we conclude by applying Lemma 1 with $\rho = \delta$. □

Theorem 6. Let $g \in \mathcal{P}$, $\epsilon = \|\Phi(g) - g\|_\beta$, and $\delta \le n/(\mathrm{e}\,\lambda_\nu)$, and assume that the constant $A$ of Lemma 2 satisfies
$$C = \frac{A}{\lambda_{\nu+1}} + \frac{A\,\delta}{n}\,\mathrm{e}^{\lambda_\nu \delta} < 1.$$
Then the sequence $g, \Phi(g), \Phi^2(g), \ldots$ tends to a unique fixed point $f$ of $\Phi$ in the ball
$$\bar B_\beta\Big(g, \frac{\epsilon}{1-C}\Big) = \Big\{ h \in \mathcal{P} : \|h - g\|_\beta \le \frac{\epsilon}{1-C} \Big\}.$$

Proof. A straightforward adaptation of the proof of Theorem 5, in combination with Lemma 2, shows that $\|J_\Phi(h)\|_\beta \le C$ on the ball $\bar B_\beta(g, \epsilon/(1-C))$, which means that $\|\Phi(h') - \Phi(h)\|_\beta \le C\,\|h' - h\|_\beta$ for all $h, h'$ in this ball. By induction on $k \in \mathbb{N}$, it follows that $\|\Phi^{k+1}(g) - \Phi^k(g)\|_\beta \le C^k\,\epsilon$ and $\|\Phi^{k+1}(g) - g\|_\beta \le (1 + C + \cdots + C^k)\,\epsilon \le \epsilon/(1-C)$. We conclude that $g, \Phi(g), \Phi^2(g), \ldots$ converges to a fixed point $f = g + (\Phi(g) - g) + (\Phi^2(g) - \Phi(g)) + \cdots$ of $\Phi$ in $\bar B_\beta(g, \epsilon/(1-C))$. □

Returning to the Taylor series expansion $f^{\mathrm{exact}} = f^{\mathrm{exact}}[t]$ of the exact solution of (1) at time $t$, we notice that
$$\Phi\big(f^{\mathrm{exact}}_{;n}\big)^{[i]} = \begin{cases} \big(f^{\mathrm{exact}}_{;n}\big)^{[i]} & i \le \nu \\[1mm] \big(f^{\mathrm{exact}}_{;n}\big)^{[i]} + \lambda_i^{-1}\,n\,f^{\mathrm{exact}[i]}_n\,z^{n-1} & i > \nu. \end{cases}$$
It follows that
$$\big\| \Phi\big(f^{\mathrm{exact}}_{;n}\big) - f^{\mathrm{exact}}_{;n} \big\|_\beta \le \frac{n}{\lambda_{\nu+1}}\,\beta_{n-1}\,\big|f^{\mathrm{exact}}_n\big|_\infty \le \frac{\delta^{n-1}\,K}{\lambda_{\nu+1}\,t^n} \le \frac{K}{n} \Big(\frac{\delta}{t}\Big)^n \le K \Big(\frac{\delta}{t}\Big)^n,$$
using $\lambda_{\nu+1}\,\delta > n$. Now assuming that the aimed step size $\delta \approx t\,2^{-p/n}$ was more or less achieved at the previous step, it follows that $\epsilon = \|\Phi(f^{\mathrm{exact}}_{;n}) - f^{\mathrm{exact}}_{;n}\|_\beta \lesssim K\,2^{-p}$ is of the desired order of magnitude. If the condition (29) is indeed satisfied, we should thus be able to apply Theorem 6 and conclude that the computed fixed point $\tilde f$ is within distance $\epsilon/(1 - C)$ of $f^{\mathrm{exact}}_{;n}$ for the $\|\cdot\|_\beta$ norm. Using a similar reasoning, we also see that the ansatz at the next step will be sufficiently close to the true solution for the $\Phi$-iteration to converge.

5.4. Solving steady-state problems through the first variation

A more robust but costly approach to the steady-state problem (26) is to compute the initial values $f_0^{[\nu+1]}, \ldots, f_0^{[d]}$ with sufficient precision using Newton's method. Given a tentative approximation $\check\varphi = f_0$ for the initial condition, we compute both $f = f[t; \check\varphi]$ and the first variation $V = V[t; \check\varphi]$, after which we update $\check\varphi \leftarrow \check\varphi + \Delta\check\varphi$ by solving the linear system
$$\big(f + V\,\Delta\check\varphi\big)_n^{[i]} = 0 \quad (i > \nu), \qquad \Delta\check\varphi^{[1]} = \cdots = \Delta\check\varphi^{[\nu]} = 0. \tag{30}$$
This method admits quadratic convergence, but it requires us to compute with a precision of $p + O(n)$ bits at least in order to be accurate. Indeed, this comes from the fact that $|V_k|_\infty$ grows roughly as $\lambda_d^k / k!$. On the upside, we may compute $f$ and $V$ using any of the algorithms from section 3. The total running time is therefore bounded by $\tilde O(d\,n\,p\,2^{p/n} \log(\lambda_d T))$. Notice also that it actually suffices to compute the last $d - \nu$ columns of $V$, due to the requirement that $\Delta\check\varphi^{[1]} = \cdots = \Delta\check\varphi^{[\nu]} = 0$. The main disadvantage of the above method is that the computation of the first variation $V$ along with $f$ induces an overhead of a factor $d$, which may be a problem for systems of high dimension.
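The following sketch spells out one Newton update (30) for the toy forcing term $F(v) = v^2$, whose Jacobian is $J_F(v) = \mathrm{diag}(2v)$; `taylor_coefficients` is the earlier routine, called at order $n+1$ so that the coefficients $f_n$ and $V_n$ are available. The choice of $F$ is again an illustrative assumption.

```python
import numpy as np

def newton_update(phi, lam, nu, n):
    # One Newton step for the steady-state initial values phi[nu:].
    phi, lam = np.asarray(phi, float), np.asarray(lam, float)
    d = len(phi)
    f = taylor_coefficients(phi, lam, n + 1)          # coefficients f_0 .. f_n
    V = np.zeros((n + 1, d, d))                       # first variation, V_0 = Id
    V[0] = np.eye(d)
    for k in range(n):                                # V' = (J_F(f) - Lam) V
        JV = sum(2.0 * f[j][:, None] * V[k - j] for j in range(k + 1))
        V[k + 1] = (JV - lam[:, None] * V[k]) / (k + 1)
    # solve (f + V dphi)_n^{[i]} = 0 for i > nu, with dphi^{[i <= nu]} = 0
    dphi = np.zeros(d)
    dphi[nu:] = np.linalg.solve(V[n][nu:, nu:], -f[n, nu:])
    return phi + dphi
```

Only the trailing $(d - \nu) \times (d - \nu)$ block of $V_n$ enters the linear solve, in line with the remark that the last $d - \nu$ columns of $V$ suffice.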
Let us now sketch how one might reduce this overhead by combining the variational approach with the $\Phi$-iteration. We intend to return to a more detailed analysis in a future work. Instead of using a single critical index $\nu$, the first idea is to use a pair of indices $\bar\nu \le \hat\nu$ starting at $\bar\nu = \nu$, and such that the condition (29) holds with
$$\lambda^* = \sqrt{\lambda_{\hat\nu}\,\lambda_{\hat\nu+1}}$$
in the role of $\lambda_{\nu+1}$. The second idea is to only vary the components $\check\varphi^{[\bar\nu+1]}, \ldots, \check\varphi^{[\hat\nu]}$ of the initial condition and to use the steady-state conditions for the indices $\hat\nu + 1, \ldots, d$. More specifically, for a tentative initial condition $\check\varphi = \tilde\varphi$, we first solve the steady-state problem
$$\frac{\partial f}{\partial z} + \Lambda\,f = F(f), \qquad \begin{cases} f_0^{[i]} = \check\varphi^{[i]} & i \le \hat\nu \\ f_n^{[i]} = 0 & i > \hat\nu \end{cases}$$
using the $\Phi$-iteration technique. In a similar way, we next solve the following variational steady-state problem for the $d \times (\hat\nu - \bar\nu)$ matrix $W$:
$$\frac{\partial W}{\partial z} + \Lambda\,W = J_F(f)\,W, \qquad \begin{cases} W_0^{[i]} = \mathrm{Id}^{[i,\,\bar\nu+1..\hat\nu]} & i \le \hat\nu \\ W_n^{[i]} = 0 & i > \hat\nu, \end{cases}$$
where $\mathrm{Id}^{[\cdot,\,\bar\nu+1..\hat\nu]}$ stands for the corresponding block of columns of the $d \times d$ identity matrix. As our ansatz, we may use $W(z) = \mathrm{Id}^{[\cdot,\,\bar\nu+1..\hat\nu]}$ for all $z$. Having computed $f$ and $W$ with relative precision $2^{-p}$, we finally update $\check\varphi \leftarrow \check\varphi + \Delta\check\varphi$ by solving the linear system
$$f_n^{[i]} + W_n^{[i,\bar\nu+1]}\,\Delta\check\varphi^{[\bar\nu+1]} + \cdots + W_n^{[i,\hat\nu]}\,\Delta\check\varphi^{[\hat\nu]} = 0 \qquad (i = \bar\nu+1, \ldots, \hat\nu)$$
and setting $\Delta\check\varphi^{[1]} = \cdots = \Delta\check\varphi^{[\bar\nu]} = \Delta\check\varphi^{[\hat\nu+1]} = \cdots = \Delta\check\varphi^{[d]} = 0$. We repeat this whole process until $f_n^{[i]}$ is sufficiently close to zero for $i = \bar\nu + 1, \ldots, \hat\nu$.

Remark 1. The algorithms in this section are reminiscent of implicit numerical schemes for the integration of (1). One interesting difference is that our second, optimized method only needs to compute a small part of the full Jacobian matrix.

Remark 2. Instead of imposing the exact steady-state conditions $f_n^{[\nu+1]} = \cdots = f_n^{[d]} = 0$, yet another approach would be to minimize the norm $|f_n|_\infty$ under the condition that $|f_0 - \tilde\varphi|_\infty \le |\tilde\varphi|_\infty\,2^{-p}$. This approach has the advantages that one does not need to know $\nu$ and that it might be applicable to more general complex matrices $\Lambda$. However, it requires the computation of the full Jacobian matrix.

6. Certification

In the previous two sections, we described numerical algorithms for integrating (1). An interesting question is how to compute a tight error bound such that the distance between the true and the computed solutions lies within this bound. Ball arithmetic provides a suitable framework for such computations. This variant of interval arithmetic is well suited to high precision computations with complex numbers, and we will briefly recall its basic principles in section 6.1. As a first application, we show how to make Theorem 4 more effective.

The certified integration of differential equations that are not stiff (i.e. the robust counterpart of section 4) is a classical topic in interval arithmetic. For recent algorithms of good complexity, much in the spirit of the present paper, we refer to [van der Hoeven 2016]. Most of the existing algorithms rely on Taylor series expansions, as we do, while providing rigorous tail bounds for the truncation error. The effective counterpart of Theorem 4 provides suitable tail bounds in the case of stiff differential equations. In sections 6.3 and 6.4, we show how to use this for the certified resolution of steady-state problems using the $\Phi$-iteration from section 5. Unfortunately, our first algorithms are rather naive and typically lead to heavily overestimated error bounds. The classical way to reduce this overestimation is to also compute the first variation and apply the mean value theorem: see section 6.5 for how to do this in our context.

6.1. Ball arithmetic

Given $c \in \mathbb{C}$ and $r \in \mathbb{R}^{\ge} = \{x \in \mathbb{R} : x \ge 0\}$, we write $\mathcal{B}(c, r) = \{z \in \mathbb{C} : |z - c| \le r\}$ for the closed ball with center $c$ and radius $r$. The set of such balls is denoted by $\mathcal{B}(\mathbb{C}, \mathbb{R})$.
One may lift the ring operations $+, -, \times$ in $\mathbb{C}$ to balls in $\mathcal{B}(\mathbb{C}, \mathbb{R})$ by setting
$$\mathcal{B}(b, r) \pm \mathcal{B}(c, s) = \mathcal{B}(b \pm c,\ r + s)$$
$$\mathcal{B}(b, r) \cdot \mathcal{B}(c, s) = \mathcal{B}\big(b\,c,\ (|b| + r)\,s + |c|\,r\big).$$
These formulas are chosen so as to satisfy the so-called inclusion principle: given $\circ \in \{+, -, \times\}$, $u \in \mathcal{B}(b, r)$ and $v \in \mathcal{B}(c, s)$, we have $u \circ v \in \mathcal{B}(b, r) \circ \mathcal{B}(c, s)$. This arithmetic for computing with balls is called ball arithmetic. It extends to other operations that might be defined on $\mathbb{C}$, as long as the ball lifts of these operations satisfy the inclusion principle. Any ordinary complex number $u \in \mathbb{C}$ can be reinterpreted as a ball $\mathcal{B}(u, 0)$. Given a ball $\boldsymbol{u} = \mathcal{B}(c, r)$, we also notice that $\lceil \boldsymbol{u} \rceil = |c| + r$ and $\lfloor \boldsymbol{u} \rfloor = \max(|c| - r, 0)$ provide us with reliable upper and lower bounds for $|u|$ with $u \in \boldsymbol{u}$.

Another interesting operation on balls $\boldsymbol{u} = \mathcal{B}(b, r)$ and $\boldsymbol{v} = \mathcal{B}(c, s)$ that we will need below is intersection. Assuming that the set intersection $\boldsymbol{u} \cap \boldsymbol{v}$ is non-empty, we define the ball intersection $\boldsymbol{u} \sqcap \boldsymbol{v}$ to be the ball of smallest radius that contains $\boldsymbol{u} \cap \boldsymbol{v}$. In order to determine this ball intersection, we may assume without loss of generality that $r \le s$. If $\mathcal{B}(b, r) \subseteq \mathcal{B}(c, s)$, then $\boldsymbol{u} \sqcap \boldsymbol{v} = \boldsymbol{u}$. Otherwise, let $x, y \in \mathbb{C}$ be the two (possibly identical) intersections of the circles $\partial\mathcal{B}(b, r)$ and $\partial\mathcal{B}(c, s)$. If $b$ lies on the same side of the chord $[x, y]$ as the lens $\boldsymbol{u} \cap \boldsymbol{v}$, then we still have $\boldsymbol{u} \sqcap \boldsymbol{v} = \boldsymbol{u}$. Otherwise, $\boldsymbol{u} \sqcap \boldsymbol{v} = \mathcal{B}\big(\frac{x+y}{2}, \frac{|x-y|}{2}\big)$.

It will also be convenient to extend vector notations to balls. First of all, we identify a vector $(\mathcal{B}(c^{[1]}, r^{[1]}), \ldots, \mathcal{B}(c^{[d]}, r^{[d]}))$ of balls with the "ball vector" $\mathcal{B}(c, r) \in \mathcal{B}(\mathbb{C}^d, \mathbb{R}^d)$. Given $v \in \mathbb{C}^d$ and $\boldsymbol{v} \in \mathcal{B}(\mathbb{C}^d, \mathbb{R}^d)$, we also write $v \in \boldsymbol{v}$ if and only if $v^{[i]} \in \boldsymbol{v}^{[i]}$ for $i = 1, \ldots, d$. Similar remarks apply to ball matrices, ball series, etc.

Remark 3. For effective computations, one can only work with approximations of real and complex numbers of finite precision. The IEEE 754 norm provides a standard for real floating point arithmetic based on the concept of "correct rounding". It naturally generalizes to floating point numbers with a mantissa of $p$ bits and an exponent of $q$ bits. Let $\mathbb{R}_{p,q}$ and $\mathbb{C}_{p,q}$ be the sets of real and complex floating point numbers of this type (strictly speaking, one also has special values such as $\pm\infty$ for the reliable rounding of overflowing operations). One may adapt the above arithmetic on exact balls to floating point balls in $\mathcal{B}(\mathbb{C}_{p,q}, \mathbb{R}_{p,q})$ while preserving the inclusion principle: it suffices to slightly increase the radii of the output balls so as to take care of rounding errors. For precise formulas and interesting variations, we refer to [Brent–Zimmermann 2010; van der Hoeven–Lecerf 2016]. For simplicity, the sequel will be presented for exact ball arithmetic only, but it is not hard to adapt the results so as to take rounding errors into account.

6.2. Computing bounds on compact half disks

The polynomials $F^{[i]}$ in (1) can either be represented as linear combinations of monomials, or using a dag (directed acyclic graph). In both cases, the ball arithmetic from the previous subsection allows us to reliably evaluate $F$ at balls in $\mathcal{B}(\mathbb{C}^d, \mathbb{R}^d)$.

For the effective counterpart of Theorem 4, there is a trade-off between the quality of the radius $R$ (larger radii being better) and that of the bound $B \in (\mathbb{R}^{>})^d$ (smaller values being better). For a given $B \in (\mathbb{R}^{>})^d$, it is simple to compute the best corresponding $R$ that satisfies the condition (8), by taking $R = \min_i B^{[i]} / \lceil F(\mathcal{B}(c, B))^{[i]} \rceil$. Conversely, for a fixed $R > 0$, let us recall a classical technique from interval analysis that can be used to compute an "almost best" corresponding bound $B \in (\mathbb{R}^{>})^d$ that satisfies (8). We simply construct a sequence $B_0 \le B_1 \le B_2 \le \cdots$ in $(\mathbb{R}^{\ge})^d$ by taking $B_0 = 0$ and $B_{k+1} = \lceil F(\mathcal{B}(c, B_k)) \rceil\,R$ for all $k$. If there exists a finite $B$ for which (8) holds, then the sequence $(B_k)$ usually converges to a minimal such $B$ quite quickly. After say ten iterations, we should therefore have a reasonable approximation $B_{10}$. One then slightly inflates the last value by taking $B = B_{10} + (B_{10} - B_9)$. If $\lceil F(\mathcal{B}(c, B)) \rceil\,R \le B$, then we have succeeded in finding a suitable $B$. If not, then we return "failed".

Using the above procedure, we may also compute a reasonably large compact half disk $H_R$ on which $\varphi$ is analytic, together with a bound $B$ for $|\varphi - c|$: we simply perform a dichotomic search for the largest $R$ for which the computation of a corresponding $B$ does not fail. If we also wish the bound $B$ to remain reasonably small, one may simply divide $R$ by two at the end of the search and compute the corresponding $B$.
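To make sections 6.1 and 6.2 concrete, here is a minimal Python model of exact ball arithmetic together with the fixed-point search for $B$. Rounding errors are ignored, as in the text, and `F_ball` stands for any user-supplied routine that evaluates the forcing term on a vector of balls; both are assumptions of this sketch.

```python
import numpy as np

class Ball:
    # closed complex ball B(c, r); the lifted operations obey the inclusion principle
    def __init__(self, c, r=0.0):
        self.c, self.r = complex(c), float(r)
    def __add__(self, o):
        return Ball(self.c + o.c, self.r + o.r)
    def __sub__(self, o):
        return Ball(self.c - o.c, self.r + o.r)
    def __mul__(self, o):
        return Ball(self.c * o.c, (abs(self.c) + self.r) * o.r + abs(o.c) * self.r)
    def mag(self):
        return abs(self.c) + self.r     # reliable upper bound for |u|, u in the ball

def bound_B(F_ball, c, R, iters=10):
    # search for B with |F(B(c, B))|^ * R <= B, as in section 6.2
    up = lambda B: R * np.array([u.mag() for u in
                                 F_ball([Ball(c[i], B[i]) for i in range(len(c))])])
    B_prev = B = np.zeros(len(c))
    for _ in range(iters):
        B_prev, B = B, up(B)            # B_{k+1} = |F(B(c, B_k))|^ * R
    B = B + (B - B_prev)                # slight inflation of the last value
    return B if np.all(up(B) <= B) else None   # None encodes "failed"

# example with a hypothetical forcing term F(v) = (1 + v1*v2, v1):
# B = bound_B(lambda v: [Ball(1.0) + v[0] * v[1], v[0]], [0.5, 0.5], 0.1)
```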
6.3. Certified integration of stiff differential equations

Let us now return to the integration of (1) and let $\varphi^{\mathrm{exact}}$ be the exact solution for the initial condition $\varphi^{\mathrm{exact}}(0) = c \in \mathbb{R}^d$. Let $R \in \mathbb{R}^{>}$ and $B \in (\mathbb{R}^{>})^d$ be computed as above, such that (8) holds. Assume that we were able to reliably integrate (1) until a given time $t \le R/2$, so that we have a ball enclosure $\tilde{\boldsymbol{\varphi}} = \tilde{\boldsymbol{\varphi}}(t)$ with $\varphi^{\mathrm{exact}}(t) \in \tilde{\boldsymbol{\varphi}}$.

In order to adapt the $\Phi$-iteration to ball arithmetic, we first deduce from Theorem 4 that $|f^{\mathrm{exact}}[t]_n^{[i]}| \le B^{[i]} / t^n$ for $i = \nu+1, \ldots, d$. This provides us with the required steady-state conditions $\boldsymbol{f}_n^{[i]} = \mathcal{B}(0, B^{[i]}/t^n)$ for $i > \nu$, in addition to the initial conditions $\boldsymbol{f}_0^{[i]} = \tilde{\boldsymbol{\varphi}}^{[i]}$ for $i \le \nu$. The ball counterpart of our steady-state problem thus becomes
$$\frac{\partial \boldsymbol{f}}{\partial z} + \Lambda\,\boldsymbol{f} = F(\boldsymbol{f}), \qquad \begin{cases} \boldsymbol{f}_0^{[i]} = \tilde{\boldsymbol{\varphi}}^{[i]} & i \le \nu \\ \boldsymbol{f}_n^{[i]} = \mathcal{B}(0, B^{[i]}/t^n) & i > \nu. \end{cases} \tag{31}$$
We correspondingly define the ball version $\boldsymbol{\Phi}$ of $\Phi$ for ball polynomials $\boldsymbol{g}$ by
$$\boldsymbol{\Phi}(\boldsymbol{g})^{[i]} = \begin{cases} \tilde{\boldsymbol{\varphi}}^{[i]} + \int \big(F^{[i]}(\boldsymbol{g}) - \lambda_i\,\boldsymbol{g}^{[i]}\big) & i \le \nu \\[1mm] \lambda_i^{-1} \big(F^{[i]}(\boldsymbol{g}) - \partial \boldsymbol{g}^{[i]} - n\,\mathcal{B}(0, B^{[i]}/t^n)\,z^{n-1}\big) & i > \nu \end{cases} \quad \bmod z^n. \tag{32}$$
This map has the property that $f^{\mathrm{exact}}_{;n} \in \boldsymbol{g} \Longrightarrow f^{\mathrm{exact}}_{;n} \in \boldsymbol{\Phi}(\boldsymbol{g})$. In our ball context, we may actually iterate using the following improved version $\boldsymbol{\Phi}^\sqcap$ of $\boldsymbol{\Phi}$:
$$\boldsymbol{\Phi}^\sqcap(\boldsymbol{g}) = \boldsymbol{\Phi}(\boldsymbol{g}) \sqcap \boldsymbol{g}.$$
Here we understand that the intersections are taken coefficientwise. Since iterations using $\boldsymbol{\Phi}^\sqcap$ can only improve our enclosures, we do not need to worry much about the ansatz. We may deduce a reasonably good ansatz from the power series expansion at the previous time $t_- < t$ and the Cauchy bounds $|f^{\mathrm{exact}}[t]_k^{[i]}| \le B^{[i]}/t^k$ for all $k < n$. But it is perfectly reasonable as well to use
$$\boldsymbol{f} = \tilde{\boldsymbol{\varphi}} + \mathcal{B}(0, B/t)\,z + \cdots + \mathcal{B}(0, B/t^{n-1})\,z^{n-1}.$$
Applying $\boldsymbol{\Phi}^\sqcap$ a sufficient number of times to this ansatz, we obtain the desired ball enclosure $\boldsymbol{f} \ni f^{\mathrm{exact}}[t]_{;n}$ of the truncated power series expansion of $\varphi^{\mathrm{exact}}$ at time $t$. Assuming that $\delta < t$, we may then deduce the enclosure
$$\tilde{\boldsymbol{\varphi}}(t + \delta) = \boldsymbol{f}(\delta) + \mathcal{B}\Big(0,\ \frac{B\,(\delta/t)^n}{1 - \delta/t}\Big) \tag{33}$$
of $\varphi^{\mathrm{exact}}(t+\delta)$, where the second term bounds the truncation tail $\sum_{k \ge n} f^{\mathrm{exact}}[t]_k\,\delta^k$. Notice that the cost of the computation of certified solutions in this way is similar to the cost of Algorithm 2, up to a constant factor.
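The improved iteration $\boldsymbol{\Phi}^\sqcap$ only needs the ball meet on top of an implementation `phi_ball` of (32). The sketch below reuses the `Ball` class from the previous sketch and restricts to real balls, for which the meet reduces to an interval intersection; the complex case follows the circle-intersection recipe of section 6.1. Both `phi_ball` and the polynomial layout (a list of coefficient rows of balls) are assumptions of this sketch.

```python
def meet(u, v):
    # smallest real ball containing the intersection of the real balls u and v
    lo = max(u.c.real - u.r, v.c.real - v.r)
    hi = min(u.c.real + u.r, v.c.real + v.r)
    assert lo <= hi, "empty intersection: the enclosures are inconsistent"
    return Ball((lo + hi) / 2, (hi - lo) / 2)

def phi_meet(phi_ball, g, sweeps):
    # iterate Phi^meet(g) = Phi(g) meet g; the enclosures can only shrink,
    # so the quality of the ansatz g hardly matters
    for _ in range(sweeps):
        h = phi_ball(g)
        g = [[meet(hk, gk) for hk, gk in zip(hrow, grow)]
             for hrow, grow in zip(h, g)]
    return g
```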
6.4. Certification of approximate solutions

The above method relies on the property that the steady-state problem (31) for balls specializes into a numerical steady-state problem that admits $f^{\mathrm{exact}}_{;n}$ as a solution, since $\varphi^{\mathrm{exact}}(t) \in \tilde{\boldsymbol{\varphi}}$ and $f^{\mathrm{exact}}_n \in \mathcal{B}(0, B/t^n)$. Given a numerical approximation of $f^{\mathrm{exact}}_{;n}$, as computed in section 5, a related problem is whether we can certify the existence of an actual solution in a small neighbourhood of the approximate solution.

In order to make this more precise, let us introduce a few more notations. Given $\check\varphi^{[1]}, \ldots, \check\varphi^{[\nu]} \in \mathbb{C}$ and $\sigma^{[\nu+1]}, \ldots, \sigma^{[d]} \in \mathbb{C}$, consider the numeric steady-state problem
$$\frac{\partial f}{\partial z} + \Lambda\,f = F(f), \qquad \begin{cases} f_0^{[i]} = \check\varphi^{[i]} & i \le \nu \\ f_n^{[i]} = \sigma^{[i]} & i > \nu. \end{cases} \tag{34}$$
We will denote by $f^{\mathrm{sol}}[t; \check\varphi, \sigma]$ any exact solution to this problem, if such a solution exists. It will be convenient to regard $\check\varphi$ and $\sigma$ as elements of $\mathbb{C}^d$, through padding with zeros. Now consider the operator $\Phi_{\check\varphi, \sigma}\colon \mathcal{P} \to \mathcal{P}$ defined by
$$\Phi_{\check\varphi,\sigma}(g)^{[i]} = \begin{cases} \check\varphi^{[i]} + \int \big(F^{[i]}(g) - \lambda_i\,g^{[i]}\big) & i \le \nu \\[1mm] \lambda_i^{-1} \big(F^{[i]}(g) - \partial g^{[i]} - n\,\sigma^{[i]}\,z^{n-1}\big) & i > \nu \end{cases} \quad \bmod z^n. \tag{35}$$
Through the iterated application of $\Phi_{\check\varphi,\sigma}$ to a suitable ansatz, we may obtain a numerical approximation $\tilde f[t; \check\varphi, \sigma]$ of $f^{\mathrm{sol}}[t; \check\varphi, \sigma]_{;n}$.

Now consider balls $\check{\boldsymbol{\varphi}}^{[1]}, \ldots, \check{\boldsymbol{\varphi}}^{[\nu]} \in \mathcal{B}(\mathbb{C}, \mathbb{R})$ and $\boldsymbol{\sigma}^{[\nu+1]}, \ldots, \boldsymbol{\sigma}^{[d]} \in \mathcal{B}(\mathbb{C}, \mathbb{R})$, again padded with zeros. Then we have the following ball analogue of (34):
$$\frac{\partial \boldsymbol{f}}{\partial z} + \Lambda\,\boldsymbol{f} = F(\boldsymbol{f}), \qquad \begin{cases} \boldsymbol{f}_0^{[i]} = \check{\boldsymbol{\varphi}}^{[i]} & i \le \nu \\ \boldsymbol{f}_n^{[i]} = \boldsymbol{\sigma}^{[i]} & i > \nu. \end{cases} \tag{36}$$
Let $\check\varphi$ and $\sigma$ denote the centers of $\check{\boldsymbol{\varphi}}$ and $\boldsymbol{\sigma}$. Starting from a numerical approximation $\tilde f[t; \check\varphi, \sigma]$ of $f^{\mathrm{sol}}[t; \check\varphi, \sigma]_{;n}$, we wish to compute a truncated ball solution $\boldsymbol{f}$ to (36) with the property that $f^{\mathrm{sol}}[t; \check\varphi', \sigma']_{;n} \in \boldsymbol{f}$ for all $\check\varphi' \in \check{\boldsymbol{\varphi}}$ and $\sigma' \in \boldsymbol{\sigma}$. In order to do this, the idea is again to use a suitable ball version $\boldsymbol{\Phi}_{\check{\boldsymbol{\varphi}}, \boldsymbol{\sigma}}$ of $\Phi_{\check\varphi,\sigma}$, given by
$$\boldsymbol{\Phi}_{\check{\boldsymbol{\varphi}},\boldsymbol{\sigma}}(\boldsymbol{g})^{[i]} = \begin{cases} \check{\boldsymbol{\varphi}}^{[i]} + \int \big(F^{[i]}(\boldsymbol{g}) - \lambda_i\,\boldsymbol{g}^{[i]}\big) & i \le \nu \\[1mm] \lambda_i^{-1} \big(F^{[i]}(\boldsymbol{g}) - \partial \boldsymbol{g}^{[i]} - n\,\boldsymbol{\sigma}^{[i]}\,z^{n-1}\big) & i > \nu \end{cases} \quad \bmod z^n,$$
and to keep applying $\boldsymbol{\Phi}_{\check{\boldsymbol{\varphi}},\boldsymbol{\sigma}}$ to the ansatz $\tilde f[t; \check\varphi, \sigma]$ until we are sufficiently close to a fixed point. Using a similar inflation technique as in section 6.2, we finally obtain a truncated ball series $\boldsymbol{f}$ with $\boldsymbol{\Phi}_{\check{\boldsymbol{\varphi}},\boldsymbol{\sigma}}(\boldsymbol{f}) \subseteq \boldsymbol{f}$, or "failed". Now for any $\check\varphi' \in \check{\boldsymbol{\varphi}}$ and $\sigma' \in \boldsymbol{\sigma}$, we notice that this also yields $\Phi_{\check\varphi',\sigma'}(\boldsymbol{f}) \subseteq \boldsymbol{f}$. Thinking of $\boldsymbol{f}$ as a compact set of truncated series in $\mathcal{P}$, this means in particular that $\Phi_{\check\varphi',\sigma'}$ admits a fixed point $f^{\mathrm{sol}}[t; \check\varphi', \sigma']_{;n} \in \boldsymbol{f}$, as desired.

6.5. Curbing the wrapping effect

If $\nu = d$, then the algorithm from section 6.4 specializes to a classical way to compute the enclosure (33) of a solution of (1) at time $t + \delta$ as a function of the enclosure $\tilde{\boldsymbol{\varphi}} = \mathcal{B}(\tilde\varphi, \rho) = \boldsymbol{f}_0$ at time $t$. However, it is well known that this method suffers from a large overestimation of the errors, due to the so-called "wrapping effect" [Moore 1965; Nickel 1985; Gambill–Skeel 1988; Neumaier 1993; Kühn 1998; Lohner 2001; Neumaier 2002]. A well known technique to reduce this overestimation is to first compute a certified solution for the initial condition $\mathcal{B}(\tilde\varphi, 0)$ instead of $\tilde{\boldsymbol{\varphi}}$, and then to bound the error due to this replacement by investigating the first variation. Let us now show how to extend this technique to general critical indices $\nu$.

With $\mathcal{B}(\check\varphi, 0)$ in the role of $\check{\boldsymbol{\varphi}}$ and $\mathcal{B}(\sigma, 0)$ in the role of $\boldsymbol{\sigma}$, we start with the computation of a truncated certified solution $\boldsymbol{f}^{\mathrm{ctr}}$ to (36). Recall that this ball solution has the property that $f^{\mathrm{sol}}[t; \check\varphi, \sigma]_{;n} \in \boldsymbol{f}^{\mathrm{ctr}}$. Since the initial conditions are exact, this solution is generally fairly accurate. Now for any $\check\varphi' \in \check{\boldsymbol{\varphi}}$, we may write
$$f^{\mathrm{sol}}[t; \check\varphi', \sigma]_{;n} = f^{\mathrm{sol}}[t; \check\varphi, \sigma]_{;n} + \int_0^1 \frac{\partial}{\partial \epsilon}\,f^{\mathrm{sol}}\big[t; \check\varphi + \epsilon\,(\check\varphi' - \check\varphi), \sigma\big]_{;n}\,\mathrm{d}\epsilon. \tag{37}$$
We next observe that
$$\frac{\partial}{\partial \epsilon}\,f^{\mathrm{sol}}\big[t; \check\varphi + \epsilon\,(\check\varphi' - \check\varphi), \sigma\big] = V^{\mathrm{sol}}\big[t; \check\varphi + \epsilon\,(\check\varphi' - \check\varphi), \sigma\big]\,(\check\varphi' - \check\varphi),$$
where $V^{\mathrm{sol}}[t; \check\varphi, \sigma]$ denotes the exact solution of the following equation for the first variation:
$$\frac{\partial V}{\partial z} + \Lambda\,V = J_F\big(f^{\mathrm{sol}}[t;\check\varphi,\sigma]\big)\,V, \qquad \begin{cases} V_0^{[i]} = \mathrm{Id}^{[i]} & i \le \nu \\ V_n^{[i]} = 0 & i > \nu. \end{cases}$$
This equation again admits a ball analogue
$$\frac{\partial \boldsymbol{V}}{\partial z} + \Lambda\,\boldsymbol{V} = J_F(\boldsymbol{f})\,\boldsymbol{V}, \qquad \begin{cases} \boldsymbol{V}_0^{[i]} = \mathrm{Id}^{[i]} & i \le \nu \\ \boldsymbol{V}_n^{[i]} = 0 & i > \nu, \end{cases}$$
where $\boldsymbol{f}$ is a solution to (36). We only compute a crude solution $\boldsymbol{V}$ to this equation, for a crude solution $\boldsymbol{f}$ to (36), both truncated at the order $n$. These solutions can be obtained using the techniques from section 6.3. From (37) we finally conclude that
$$\boldsymbol{f}^* = \boldsymbol{f}^{\mathrm{ctr}} + \boldsymbol{V}\,\big(\check{\boldsymbol{\varphi}} - \check\varphi\big) \tag{38}$$
is a ball enclosure for $f^{\mathrm{sol}}[t; \check{\boldsymbol{\varphi}}, \boldsymbol{\sigma}]_{;n}$. This enclosure is usually of a much better quality than the one obtained with the method from section 6.4. However, its computational cost is roughly $d$ times higher, due to the fact that we need to determine the first variation.

6.6. Handling close eigenvalues

In the case when $\lambda_{\nu+1} / \lambda_\nu$ is too small for (29) to hold, we may generalize the above strategy and incorporate some of the ideas from section 5.4. With $\nu = \bar\nu$ and $\hat\nu$ as in there, let us briefly outline how to do this. We proceed in four stages:

- We first compute a truncated numeric solution $\tilde f \in \mathcal{P}$ to (34), using the technique from section 5.4.
- Let $\check{\boldsymbol{\varphi}} \in \mathcal{B}(\mathbb{C}^d, \mathbb{R}^d)$ be such that $\check{\boldsymbol{\varphi}}^{[i]} = \tilde{\boldsymbol{\varphi}}^{[i]}$ for $i \le \hat\nu$ and $\check{\boldsymbol{\varphi}}^{[i]} = 0$ for $i > \hat\nu$. Similarly, let $\boldsymbol{\sigma}$ be such that $\boldsymbol{\sigma}^{[i]} = 0$ for $i \le \hat\nu$ and $\boldsymbol{\sigma}^{[i]} = \mathcal{B}(0, B^{[i]}/t^n)$ for $i > \hat\nu$. We use $\tilde f$ as an ansatz for its deformation into a reliable truncated solution $\boldsymbol{f}^{\mathrm{ctr}}$ to (36), but with $\hat\nu$ in the role of $\nu$, with $\mathcal{B}(\check\varphi, 0)$ in the role of $\check{\boldsymbol{\varphi}}$, and with $\mathcal{B}(\sigma, 0)$ in the role of $\boldsymbol{\sigma}$.
- We further deform $\boldsymbol{f}^{\mathrm{ctr}}$ into a reliable truncated solution $\boldsymbol{f}^{\dagger}$ to (36), this time with $\bar\nu$ in the role of $\nu$.
We do this by computing, in addition, a crude but reliable truncated solution $\boldsymbol{W}$ to the equation
$$\frac{\partial \boldsymbol{W}}{\partial z} + \Lambda\,\boldsymbol{W} = J_F(\boldsymbol{f})\,\boldsymbol{W}, \qquad \begin{cases} \boldsymbol{W}_0^{[i]} = 0 & i \le \bar\nu \\ \boldsymbol{W}_0^{[i]} = \mathrm{Id}^{[i]} & \bar\nu < i \le \hat\nu \\ \boldsymbol{W}_n^{[i]} = 0 & i > \hat\nu. \end{cases}$$
We then take
$$\boldsymbol{f}^{\dagger} = \boldsymbol{f}^{\mathrm{ctr}} + \boldsymbol{W}\,\mathcal{B}(0, \eta), \qquad \eta^{[i]} = \begin{cases} B^{[i]}/t^n & \bar\nu < i \le \hat\nu \\ 0 & \text{otherwise}. \end{cases}$$
- We finally compute the desired truncated solution $\boldsymbol{f}^*$ to (36) as
$$\boldsymbol{f}^* = \boldsymbol{f}^{\dagger} + \boldsymbol{V}\,\big(\check{\boldsymbol{\varphi}} - \check\varphi\big),$$
where $\boldsymbol{V}$ is computed as above.

7. Conclusion

There are several directions for generalizations and further improvements of the results in this paper. For simplicity, we assumed $\Lambda$ to be a diagonal matrix with positive eigenvalues. It should not be hard to adapt the results to arbitrary matrices $\Lambda$ with positive real eigenvalues only (as long as the forcing term $F$ does not explode for the change of coordinates that puts $\Lambda$ in Jordan normal form). A more interesting generalization would be to consider complex eigenvalues $\lambda_i$ with strictly positive real parts. In that case, the half disks $H_R$ would need to be replaced by smaller compact sectors of the form $H_{R,\theta} = \{t \in \mathbb{C} : |t| \le R,\ |\arg t| \le \theta\}$ with $\theta < \pi/2$. Even more generally, one may investigate how to develop accurate Taylor series methods for integrating arbitrary differential equations, with the property that the step size is proportional to the radius of convergence of the exact solution; see also Remark 2.

In this paper, we focussed on high precision computations using high order Taylor schemes. Another interesting question is whether it is possible to develop analogues of our methods in the same spirit as more traditional Runge–Kutta schemes. How would such analogues compare to existing implicit integration schemes?

Concerning certified integration, the theory of Taylor models [Makino–Berz 1996, 2004] allows for higher order expansions of the flow $f[t; \check\varphi]$ as a function of the initial condition $\check\varphi$ (that is, beyond the computation of the first variation as in this paper). We think that it should be relatively straightforward to adapt our techniques to such higher order expansions. In a similar vein, we expect that the techniques from [Gambill–Skeel 1988; Kühn 1998] for further curbing the wrapping effect can be adapted to the stiff setting.

From a more practical point of view, it would finally be interesting to have more and better machine implementations. For the moment, we only did a toy implementation of the main algorithm from section 5 in the Mathemagix system. We found this implementation to work as predicted on various simple examples. Detailed machine experiments with larger systems and the algorithms from sections 5.4 and 6 should make it possible to further improve the new techniques.

Bibliography

A. Bostan, F. Chyzak, F. Ollivier, B. Salvy, É. Schost, and A. Sedoglavic. Fast computation of power series solutions of systems of differential equations. In Proceedings of the 18th ACM-SIAM Symposium on Discrete Algorithms, pages 1012–1021, New Orleans, Louisiana, U.S.A., January 2007.

R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.

R. P. Brent and P. Zimmermann. Modern Computer Arithmetic. Cambridge University Press, 2010.

J. C. Butcher. Numerical methods for ordinary differential equations in the 20th century. J. Comput. Appl. Math., 125:1–29, 2000.

J.-P. Eckmann, H. Koch, and P. Wittwer. A computer-assisted proof of universality in area-preserving maps. Memoirs of the AMS, 47(289), 1984.

M. J. Fischer and L. J. Stockmeyer. Fast on-line integer multiplication. J. Comput. Syst. Sci., 9:67–72, 1974.

T. N. Gambill and R. D. Skeel. Logarithmic reduction of the wrapping effect with application to ordinary differential equations. SIAM J. Numer. Anal., 25(1):153–162, 1988.

J. van der Hoeven. Relax, but don't be too lazy. J. Symbolic Comput., 34:479–542, 2002.

J. van der Hoeven. New algorithms for relaxed multiplication. J. Symbolic Comput., 42(8):792–802, 2007.
J. van der Hoeven. Newton's method and FFT trading. J. Symbolic Comput., 45(8):857–878, 2010.

J. van der Hoeven. Certifying trajectories of dynamical systems. In I. Kotsireas, S. Rump, and C. Yap, editors, Mathematical Aspects of Computer and Information Sciences, pages 520–532, Cham, 2016. Springer International Publishing.

J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix. Available from 2002 at http://www.mathemagix.org.

J. van der Hoeven and G. Lecerf. Evaluating straight-line programs over balls. In P. Montuschi, M. Schulte, J. Hormigo, S. Oberman, and N. Revol, editors, 23rd IEEE Symposium on Computer Arithmetic, pages 142–149. IEEE, 2016.

W. Kühn. Rigorously computed orbits of dynamical systems without the wrapping effect. Computing, 61:47–67, 1998.

O. E. Lanford III. A computer assisted proof of the Feigenbaum conjectures. Bull. Amer. Math. Soc., 6:427–434, 1982.

R. Lohner. Einschließung der Lösung gewöhnlicher Anfangs- und Randwertaufgaben und Anwendungen. PhD thesis, Universität Karlsruhe, 1988.

R. Lohner. On the ubiquity of the wrapping effect in the computation of error bounds. In U. Kulisch, R. Lohner, and A. Facius, editors, Perspectives on Enclosure Methods, pages 201–217, Wien, New York, 2001. Springer.

K. Makino and M. Berz. Remainder differential algebras and their applications. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, Computational Differentiation: Techniques, Applications and Tools, pages 63–74. SIAM, Philadelphia, 1996.

K. Makino and M. Berz. Suppression of the wrapping effect by Taylor model-based validated integrators. MSU Report MSUHEP 40910, Michigan State University, 2004.

R. E. Moore. Automatic local coordinate transformations to reduce the growth of error bounds in interval computation of solutions to ordinary differential equations. In L. B. Rall, editor, Error in Digital Computation, volume 2, pages 103–140. John Wiley, 1965.

R. E. Moore. Interval Analysis. Prentice Hall, Englewood Cliffs, N.J., 1966.

A. Neumaier. The wrapping effect, ellipsoid arithmetic, stability and confidence regions. Computing Supplementum, 9:175–190, 1993.

A. Neumaier. Taylor forms - use and limits. Reliable Computing, 9:43–79, 2002.

K. Nickel. How to fight the wrapping effect. In Interval Mathematics 1985, pages 121–132. Springer-Verlag, 1985.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.

A. Sedoglavic. Méthodes seminumériques en algèbre différentielle; applications à l'étude des propriétés structurelles de systèmes différentiels algébriques en automatique. PhD thesis, École polytechnique, 2001.