Laboratoire d'informatique, UMR 7161 CNRS
Campus de l'École polytechnique
1, rue Honoré d'Estienne d'Orves
Bâtiment Alan Turing, CS35003
91120 Palaiseau

\begin{abstract}
This paper concerns the reliable integration of dynamical systems, with a focus on the computation of one specific trajectory for a given initial condition at high precision. We describe several algorithmic tricks which allow for faster parallel computations and better error estimates. We also introduce so called ``Lagrange models''. These serve a similar purpose as the more classical Taylor models, but we will show that they allow for larger step sizes, especially when the truncation orders get large.
\end{abstract}

\section{Introduction}

Let $\Phi = (\Phi_1, \ldots, \Phi_d) \in \mathbb{C}[F_1, \ldots, F_d]^d$ and consider the dynamical system
\[ f' \;=\; \Phi(f). \tag{1} \]
Given an initial condition $f(0) = I \in \mathbb{C}^d$ at the origin and a target point $u \in \mathbb{C}$ such that $f$ is analytic on the segment $[0, u]$, the topic of this paper is to compute $f(u)$.

On actual computers, this problem can only be solved at finite precisions, although the user might request the precision to be as large as needed. One high level way to formalize this is to assume that the numbers in the input (i.e. the coefficients of $\Phi$ and $I$, as well as $u$) are computable, and to request the output $f(u)$ again to be a vector of computable complex numbers. From a more practical point of view, it is customary to perform the computations using interval arithmetic. In our complex setting, we prefer to use a variant, called ball arithmetic or midpoint-radius arithmetic. In our main problem, this means that we replace our input coefficients by complex balls, and that we request the output again to be a vector of balls.

Throughout this paper, we assume that the reader is familiar with interval and ball arithmetic; we refer to the bibliography for basic details on ball arithmetic, and there are dedicated websites which provide a lot of information on interval analysis. It will be convenient to denote balls using a bold font, as in $\boldsymbol{x}$. The explicit compact ball with center $c$ and radius $r$ will be denoted by $\mathcal{B}(c, r)$. Vector notation will also be used systematically.
For instance, if $x \in \mathbb{R}^d$ and $r \in (\mathbb{R}^{\geqslant})^d$ with $\mathbb{R}^{\geqslant} = \{x \in \mathbb{R} : x \geqslant 0\}$, then $\mathcal{B}(x, r) = (\mathcal{B}(x_1, r_1), \ldots, \mathcal{B}(x_d, r_d))$.

Sometimes, it is useful to obtain further information about the dependence of the value $f(u)$ on the initial conditions; this means that we are interested in the flow $g(z, I)$, which satisfies the same differential equation (1) and the initial condition $g(0, I) = I$. In particular, the first variation $V = \partial g / \partial I$ is an important quantity, since it measures the sensitivity of the output on the initial conditions. If $\kappa$ denotes the condition number of the problem, then it will typically be necessary to compute with a precision of at least $\log_2 \kappa$ bits in order to obtain any useful output.

There is a vast literature on the reliable integration of dynamical systems. Until recently, most work focussed on low precision, allowing for efficient implementations using machine arithmetic. For practical purposes, it is also useful to have higher order information about the flow; Taylor models are currently the most efficient device for performing this kind of computations.

In this paper, we are interested in the time complexity of reliable integration of dynamical systems. We take a more theoretical perspective in which the working precision might become high. We are interested in certifying one particular trajectory, so we do not request any information about the flow beyond the first variation. From this complexity point of view, it is important to stress that there is a tradeoff between efficiency and quality: faster algorithms can typically be designed if we allow for larger radii in the output. Whenever one of these radii becomes infinite, we say that the integration method breaks down: a ball with infinite radius no longer provides any useful information. Now some ``radius swell'' occurs structurally, as soon as the condition number of the problem becomes large. But high quality integration methods should limit all other sources of precision loss.
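Since the whole paper relies on midpoint-radius arithmetic, the operations alluded to above can be made concrete by a minimal sketch in Python. This is an illustration only: the class name and methods are ours, and a real implementation must additionally perform outward rounding of every floating point operation, which we ignore here.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    """Complex ball with center c and radius r: the set {z : |z - c| <= r}."""
    c: complex
    r: float

    def __add__(self, other):
        # Sum of two balls: centers add, radii add.
        return Ball(self.c + other.c, self.r + other.r)

    def __mul__(self, other):
        # Product enclosure: |z1*z2 - c1*c2| <= |c1| r2 + |c2| r1 + r1 r2.
        return Ball(self.c * other.c,
                    abs(self.c) * other.r + abs(other.c) * self.r + self.r * other.r)

    def contains(self, z: complex) -> bool:
        return abs(z - self.c) <= self.r

# Any product of members of two balls lies in the product ball.
b1 = Ball(1 + 1j, 0.1)
b2 = Ball(2 - 1j, 0.2)
p = b1 * b2
```

The radius formula for the product follows from the triangle inequality, exactly as for the Taylor and Lagrange model products discussed later in the paper.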
The outline of this paper is as follows:
\begin{enumerate}
\item For problems from reliable analysis it is usually best to perform certifications at the outermost level. In our case, this means that we first compute the entire numeric trajectory with a sufficient precision, and only perform the certification at a second stage. We will see that this numeric computation is the only part of the method which is essentially sequential.
\item The problem of certifying a complete trajectory contains a global and a local part. From the global point of view, we need to cut the trajectory into smaller pieces that can be certified by local means, and then devise a method to recombine the local certificates into a global one.
\item For the local certification, we will introduce Lagrange models. As in the case of Taylor models, this approach is based on Taylor series expansions, but the more precise error estimates allow for larger time steps.
\end{enumerate}
The first idea has been applied to many problems from reliable computation (it is for instance known as Hansen's method in the case of matrix inversion). Nevertheless, we think that progress is often possible by applying this simple idea even more systematically. In Section 2, we will briefly recall some facts about the efficient numeric integration of (1). In Section 3, we discuss the way in which the problem of certifying a global trajectory can be reduced to more local problems. It is interesting to compare this approach with the more classical scheme of stepwise certification, performed along with the numerical integration itself. The problem with stepwise schemes is that they are essentially sequential and thereby give rise to a precision loss which is linear in the number of steps. The global approach reduces this to a logarithmic precision loss only. The global strategy already pays off in the linear case, and it is possible to reorganize the computations in such a way that it can be re-incorporated into an iterative scheme.
Our current presentation has the merit of conceptual simplicity and ease of implementation. The main contribution of this paper concerns the introduction of ``Lagrange models'' in Section 4 and the way that such models allow for larger time steps. Classical Taylor models approximate analytic functions on the compact unit disk (say) by a polynomial $P \in \mathbb{C}[z]$ of degree $< n$ and an error $\varepsilon \geqslant 0$ with the property that $|f(z) - P(z)| \leqslant \varepsilon$ for all $|z| \leqslant 1$. The idea behind Lagrange models is to give a more precise meaning to the ``big Oh'' in $f = f_0 + \cdots + f_{n-1} z^{n-1} + O(z^n)$. More precisely, such a model consists of a polynomial $\boldsymbol{P} \in \mathbb{B}[z]$ of degree $< n$ with ball coefficients and an $\varepsilon \geqslant 0$ such that $f \in \boldsymbol{P} + \mathcal{B}(0, \varepsilon)\, z^n$. The advantage comes from the fact that the integration operator has norm $1$ for general analytic functions on the unit disk, but only norm $1/(n+1)$ for analytic functions that are divisible by $z^n$. Although Lagrange model arithmetic can be a constant times more expensive than Taylor model arithmetic, the more precise error estimates allow us to increase the time step.

An earlier (non refereed) version of some of this material appeared in lecture notes of the author. The certification algorithm has been implemented in the Mathemagix system, both for Taylor models and for Lagrange models. For high precision computations, we indeed observed improved step sizes in the case of Lagrange models.

\section{Fast numerical integration}

\subsection{Classical algorithms}

Since the main focus of this paper is on certification, we will content ourselves with a quick survey of the most significant facts about numerical integration schemes. From a high level perspective, the integration problem between the times $0$ and $u$ can be decomposed into two parts: finding suitable intermediate points $0 = z_0 < z_1 < \cdots < z_{s-1} < z_s = u$, and the actual computation of $f(z_{k+1})$ as a function of $f(z_k)$ for $k = 0, \ldots, s-1$ (or as a function of several previous values $f(z_k), \ldots, f(z_{k-l})$ for some schemes). The optimal choice of intermediate points is determined by the distance to the closest singularities in the complex plane, as well as by the efficiency of the scheme that computes $f(z_{k+1})$ as a function of $f(z_k)$.
For $z \in [0, u]$, let $r_z$ be the radius of convergence of $f$ at $z$. High order integration schemes will enable us to take steps $|z_{k+1} - z_k| \geqslant c\, r_{z_k}$ for some fixed constant $c > 0$. Lower order schemes may force us to take smaller steps, but perform individual steps more efficiently. In some cases (e.g. when $f$ admits many singularities just above the real axis, but none below), it may also be interesting to allow for intermediate points $z_1, \ldots, z_{s-1}$ in $\mathbb{C}$ that keep a larger distance with the singularities of $f$.

For small working precisions, Runge--Kutta methods provide the most efficient schemes for numerical integration; for high order Runge--Kutta methods, the number of evaluations of $\Phi$ required at each step grows faster than the order itself. For somewhat larger working precisions (e.g. quadruple precision), higher order methods may be required in order to produce accurate results. One first alternative is to use relaxed power series computations, which involve only a small overhead with respect to the mere computation of the truncated power series solutions. For very large orders, a power series analogue of Newton's method provides the most efficient numerical integration method. This method actually computes the first variation of the solution along with the solution itself, which is very useful for the purpose of this paper.

\subsection{Parallelism}

Another interesting question concerns the amount of computations that can be done in parallel. In principle, the integration process is essentially sequential (apart from some parallelism which may be possible at each individual step). Nevertheless, given a full numeric solution at a sufficiently large precision $p$, we claim that a solution at a roughly doubled precision $2p$ can be computed in parallel. More precisely, for each $v \in [0, u]$ and $I \in \mathbb{C}^d$, let $f_{v, I}$ be the solution of (1) with $f_{v,I}(v) = I$, and denote $V_{z, v, I} = (\partial f_{v, I} / \partial I)(z)$. We regard $V_{z, v, I}$ as the ``transitional flow'' between $v$ and $z$, assuming the initial condition $I$ at $v$. Notice that $V_{v, v, I} = \mathrm{Id}$ and, for $w \leqslant v \leqslant z$,
\begin{eqnarray*}
f_{w, I}(z) & = & f_{v,\, f_{w, I}(v)}(z) \\
V_{z, w, I} & = & V_{z,\, v,\, f_{w, I}(v)}\; V_{v, w, I}.
\end{eqnarray*}
Now assume that we are given approximations $f_k \approx f(z_k)$ for $k = 1, \ldots, s$ at precision $p$.
Then we may compute the transition matrices $V_k \approx V_{z_k, z_{k-1}, f(z_{k-1})}$ in parallel at precision $2p$. Using a dichotomic procedure of depth $\lceil \log_2 s \rceil$, we will compute improved approximations $\tilde{f}_k \approx f(z_k)$ at precision $2p$ in parallel for $k = 1, \ldots, s$, together with $\tilde{V}_k \approx V_{z_k, z_0, f(z_0)}$ at precision $p$. Assume that $s \geqslant 2$ and let $m = \lceil s/2 \rceil$. We start with the recursive computation of $\tilde{f}_k$ and $\tilde{V}_k$ for $k = 1, \ldots, m$, and likewise, for the second half of the subdivision, of $\acute{f}_k \approx f_{z_m, f_m}(z_{m+k})$ and $\acute{V}_k \approx V_{z_{m+k}, z_m, f_m}$ for $k = 1, \ldots, s - m$, at precisions $2p$ resp. $p$. Setting $\delta = \tilde{f}_m - f_m$, we have
\begin{eqnarray*}
f(z_{m+k}) & = & f_{z_m,\, f_m + \delta}(z_{m+k}) \\
& \approx & \acute{f}_k + \acute{V}_k\, \delta
\end{eqnarray*}
at precision $2p$ (for $k = 1, \ldots, s - m$), and
\[ V_{z_{m+k}, z_0, f(z_0)} \;\approx\; \acute{V}_k\, \tilde{V}_m \]
at precision $p$. It thus suffices to take $\tilde{f}_{m+k} = \acute{f}_k + \acute{V}_k\, \delta$ and $\tilde{V}_{m+k} = \acute{V}_k\, \tilde{V}_m$.

The above algorithm suggests an interesting practical strategy for the integration of dynamical systems on massively parallel computers: the fastest processor(s) in the system plays the rôle of a ``spearhead'' and performs a low precision integration at top speed. The remaining processors are used for enhancing the precision as soon as a rough initial guess of the trajectory is known. The spearhead occasionally may have to redo some computations whenever the initial guess drifts too far away from the actual solution. The remaining processors might also compute other types of ``enhancements'', such as the first and higher order variations, or certifications of the trajectory. Nevertheless, the main bottleneck on a massively parallel computer seems to be the spearhead.

\section{Global certification}

\subsection{From local to global certification}

A certified integrator of the dynamical system (1) can be defined to be a ball function
\[ \boldsymbol{g} :\; (\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I}) \;\longmapsto\; \boldsymbol{g}(\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I}) \]
with the property that $f_{v, I}(z) \in \boldsymbol{g}(\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I})$ for any $z \in \boldsymbol{z}$, $v \in \boldsymbol{v}$ and $I \in \boldsymbol{I}$. An extended certified integrator additionally requires a ball function
\[ \boldsymbol{V} :\; (\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I}) \;\longmapsto\; \boldsymbol{V}(\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I}) \]
with the property that $V_{z, v, I} \in \boldsymbol{V}(\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I})$ for any $z \in \boldsymbol{z}$, $v \in \boldsymbol{v}$ and $I \in \boldsymbol{I}$. A local certified integrator of (1) is a special kind of certified integrator which only produces meaningful results if $v$ and $z$ are sufficiently close. In other words, we allow the radii of the entries of $\boldsymbol{g}(\boldsymbol{z}, \boldsymbol{v}, \boldsymbol{I})$ to become infinite whenever this is not the case. Extended local certified integrators are defined similarly.
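The composition identities for the transitional flow that underlie the dichotomic procedure above can be checked on a simple example. For a linear system $f' = A f$, the transitional flow between $v$ and $z$ is the matrix $e^{(z - v) A}$, so composing the transition matrices of the two halves of a subdivision must reproduce the transition matrix of the whole interval. The Python sketch below is only an illustration of the identity $V_{z, w} = V_{z, v}\, V_{v, w}$ (not of the full parallel algorithm); the rotation generator $A$ is an illustrative choice for which $e^{tA}$ has a closed form.

```python
import math

def transition(t: float):
    """Transition matrix e^{tA} for A = [[0, -1], [1, 0]] (rotation by angle t)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Subdivide [0, 1] into 8 equal steps and compose the local transition
# matrices dichotomically: each half first, then the two halves together.
h = 1.0 / 8

def compose(lo, hi):
    if hi - lo == 1:
        return transition(h)
    mid = (lo + hi) // 2
    # V_{z_hi, z_lo} = V_{z_hi, z_mid} * V_{z_mid, z_lo}
    return matmul(compose(mid, hi), compose(lo, mid))

V = compose(0, 8)
W = transition(1.0)  # direct transition over the whole interval
err = max(abs(V[i][j] - W[i][j]) for i in range(2) for j in range(2))
```

The dichotomic composition agrees with the direct transition up to floating point roundoff.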
One interesting problem is how to produce global (extended) certified integrators out of local ones. The most naive strategy for doing this goes as follows. Assume that we are given a local certified integrator $\boldsymbol{g}$, as well as $v \leqslant z$ and $\boldsymbol{I}$. If the radii of the entries of $\boldsymbol{g}(z, v, \boldsymbol{I})$ are ``sufficiently small'' (finite, for instance, but we might require more precise answers), then we define $\boldsymbol{g}^{\mathrm{glob}}(z, v, \boldsymbol{I}) = \boldsymbol{g}(z, v, \boldsymbol{I})$. Otherwise, we take $w = (v + z)/2$ and define $\boldsymbol{g}^{\mathrm{glob}}(z, v, \boldsymbol{I}) = \boldsymbol{g}^{\mathrm{glob}}(z, w, \boldsymbol{g}^{\mathrm{glob}}(w, v, \boldsymbol{I}))$. One may refine the strategy by including additional exception handling for breakdown situations. Unfortunately, it is classical that this naive strategy produces error estimates of extremely poor quality (due to the wrapping effect, and for several other reasons).

\subsection{Certifying a numerical trajectory}

A better strategy is to first compute a numerical solution of (1) together with its first variation, and to certify this ``extended solution'' at a second stage. So assume that we are given a subdivision $z_0 = 0 < z_1 < \cdots < z_s = u$ and approximate values $f_1 \approx f(z_1), \ldots, f_s \approx f(z_s)$, as well as $V_1 \approx V(z_1), \ldots, V_s \approx V(z_s)$. We proceed as follows.

\paragraph{Stage 1}

We first produce reasonable candidate enclosures $\boldsymbol{\Delta}_1, \ldots, \boldsymbol{\Delta}_s$ with the intention that $f_{z_0, I}(z_k) \in \boldsymbol{\Delta}_k$ for all $k = 1, \ldots, s$ and $I \in \boldsymbol{I}$. Let $c_{\boldsymbol{I}}$ denote the center of $\boldsymbol{I} = \mathcal{B}(c_{\boldsymbol{I}}, r_{\boldsymbol{I}})$, $r_{\boldsymbol{I}}$ its radius, and let $p$ be the current working precision. For some large constant $K \geqslant 1$, a good typical ansatz would be of the form
\[ \boldsymbol{\Delta}_k \;=\; f_k + \mathcal{B}\big(0,\; 2\, \|V_k\|\, \lambda_k\big), \]
where
\[ \lambda_k \;=\; r_{\boldsymbol{I}} + K\, 2^{-p}\, (1 + \|f_k\|). \]
At the very end, we will have to prove the correctness of this ansatz, thereby producing a certificate for the numerical trajectory.

\paragraph{Stage 2}

We compute $\boldsymbol{V}_k = \boldsymbol{V}(z_k, z_{k-1}, \boldsymbol{\Delta}_{k-1})$ for $k = 1, \ldots, s$ using an extended local integrator. Given $0 \leqslant j < k \leqslant s$, and assuming correctness of the ansatz enclosures $\boldsymbol{\Delta}_j, \ldots, \boldsymbol{\Delta}_{k-1}$, this provides us with a certified enclosure
\[ \boldsymbol{V}_k\, \boldsymbol{V}_{k-1} \cdots \boldsymbol{V}_{j+1} \]
for $V_{z_k, z_j, I}$ with $I \in \boldsymbol{\Delta}_j$.

\paragraph{Stage 3}

We compute $\boldsymbol{\varphi}_k = \boldsymbol{g}(z_k, z_{k-1}, \mathcal{B}(f_{k-1}, 0))$ for $k = 1, \ldots, s$ using a local integrator. Assuming correctness of the ansatz enclosures, this provides us with certified enclosures
\[ \boldsymbol{\Psi}_k \;=\; \boldsymbol{\varphi}_k + \boldsymbol{V}_k\, (\boldsymbol{\Psi}_{k-1} - f_{k-1}), \qquad \boldsymbol{\Psi}_0 = \boldsymbol{I}, \]
for $f_{z_0, I}(z_k)$ with $I \in \boldsymbol{I}$.

\paragraph{Stage 4}

We finally check whether $\boldsymbol{\Psi}_k \subseteq \boldsymbol{\Delta}_k$ for $k = 1, \ldots, s$.
If this is the case, then the correctness of the ansatz enclosures $\boldsymbol{\Delta}_k$ follows by induction over $k$. Otherwise, for each index $k$ with $\boldsymbol{\Psi}_k \not\subseteq \boldsymbol{\Delta}_k$, we replace our ansatz $\boldsymbol{\Delta}_k$ by a new one $\tilde{\boldsymbol{\Delta}}_k$ as follows: we write $\boldsymbol{\Psi}_k = f_k + \boldsymbol{\sigma}_k$ and $\boldsymbol{\Delta}_k = f_k + \boldsymbol{\rho}_k$, and take $\tilde{\boldsymbol{\Delta}}_k = f_k + 2 \sup(\boldsymbol{\sigma}_k, \boldsymbol{\rho}_k)$. We next return to stage 2 with this new ansatz. We return an error if no certificate is obtained after a suitable and fixed number of such iterations.

\begin{remark}
If we want to certify our trajectory with a high precision $p$, then we clearly have to compute the enclosures $\boldsymbol{\Psi}_k$ with precision $p$. On the other hand, the enclosures $\boldsymbol{V}_k$ are only needed during the auxiliary computations of stages 2 and 3, and it actually suffices to compute them with a much lower precision (which remains bounded if we let $p \to \infty$). For the initial ansatz, we essentially need this lower precision to be sufficiently large for the enclosures of stages 2 and 3 to remain within the candidate enclosures $\boldsymbol{\Delta}_k$.
\end{remark}

\subsection{Algorithmic considerations and parallelism}

The next issue concerns the efficient and high quality evaluation of the formulas from stages 2 and 3. The main potential problem already occurs in the case when $\boldsymbol{V} = \boldsymbol{V}_1 = \cdots = \boldsymbol{V}_s$ is constant, and stage 2 essentially reduces to the computation of the $k$-th power $\boldsymbol{V}^k$ of a ball matrix $\boldsymbol{V}$. Assuming standard ball matrix arithmetic, the naive iterative method
\[ \boldsymbol{V}^{k+1} \;=\; \boldsymbol{V}^{k}\, \boldsymbol{V} \]
may lead to an exponential growth of the relative error as a function of $k$. Here we understand the relative error of a ball matrix to be the norm of the matrix of radii divided by the norm of the matrix of centers. The bad exponential growth occurs for instance for the matrix
\[ \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \]
which corresponds to multiplication by the complex number $1 + \mathrm{i}$. The growth remains polynomial in $k$ in the case of triangular matrices. When using binary powering
\begin{eqnarray*}
\boldsymbol{V}^{2k} & = & \big(\boldsymbol{V}^{k}\big)^2 \\
\boldsymbol{V}^{2k+1} & = & \big(\boldsymbol{V}^{k}\big)^2\, \boldsymbol{V},
\end{eqnarray*}
the growth of the relative error is systematically kept down to a polynomial in $k$. For this reason, it is recommended to evaluate the enclosures of stages 2 and 3 using a similar dichotomic algorithm as in Section 2.2. More precisely, we will compute the enclosures $\boldsymbol{\Psi}_k$ and suitable partial products $\boldsymbol{\Pi}_k$ using a parallel dichotomic algorithm for $k = 1, \ldots, s$. Assuming that $s \geqslant 2$, let $m = \lceil s/2 \rceil$.
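The difference between the two powering strategies can be observed directly. The sketch below (in Python, using numpy) represents a ball matrix by its matrix of centers and matrix of radii, propagates radii through products entrywise, and measures the relative error as defined above; floating point rounding of the centers is not tracked, since the radius growth dominates it here.

```python
import numpy as np

def ball_matmul(CA, RA, CB, RB):
    """Product of ball matrices (C, R): entrywise radius propagation."""
    C = CA @ CB
    R = np.abs(CA) @ RB + RA @ np.abs(CB) + RA @ RB
    return C, R

def rel_error(C, R):
    return np.linalg.norm(R) / np.linalg.norm(C)

# Base ball matrix: the rotation-dilation corresponding to 1 + i,
# with a tiny uniform radius.
C0 = np.array([[1.0, -1.0], [1.0, 1.0]])
R0 = np.full((2, 2), 1e-12)

k = 64
# Naive iterative powering: M_{j+1} = M_j * M.
Cn, Rn = C0.copy(), R0.copy()
for _ in range(k - 1):
    Cn, Rn = ball_matmul(Cn, Rn, C0, R0)

# Binary powering: square log2(k) = 6 times.
Cb, Rb = C0.copy(), R0.copy()
for _ in range(6):
    Cb, Rb = ball_matmul(Cb, Rb, Cb, Rb)
```

Both strategies compute the same center matrix, but the radii of the naive method swell far more quickly, in accordance with the exponential versus polynomial growth discussed above.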
We start with the recursive computation of $\boldsymbol{\Psi}_k$ and $\boldsymbol{\Pi}_k$ for $k = 1, \ldots, m$, as well as of the analogous quantities $\acute{\boldsymbol{\Psi}}_k$ and $\acute{\boldsymbol{\Pi}}_k$ for the second half of the subdivision, for $k = 1, \ldots, s - m$. Then we have
\begin{eqnarray*}
\boldsymbol{\Pi}_{m+k} & = & \acute{\boldsymbol{\Pi}}_k\, \boldsymbol{\Pi}_m \\
\boldsymbol{\Psi}_{m+k} & = & \acute{\boldsymbol{\Psi}}_k + \acute{\boldsymbol{\Pi}}_k\, (\boldsymbol{\Psi}_m - f_m)
\end{eqnarray*}
for $k = 1, \ldots, s - m$. Given the initial numerical trajectory, this shows that the cost of the certification grows only with $\log s$ on sufficiently parallel computers. It is also interesting to notice that the parallel dichotomic technique that we used to double the precision in Section 2.2 uses very similar ingredients as the above way to certify trajectories. We found this analogy to apply on several other occasions, such as the computation of eigenvalues of matrices. This is probably also related to the similarity between ball arithmetic and arithmetic in jet spaces of order one.

\section{Lagrange models}

\subsection{Taylor models}

Let $\mathcal{D}_r = \mathcal{B}(0, r)$ be the compact disk of center zero and radius $r$. A Taylor model of order $n \in \mathbb{N}$ on $\mathcal{D}_r$ consists of a polynomial $P \in \mathbb{C}[z]$ of degree $< n$ together with an error $\varepsilon \in \mathbb{R}^{\geqslant}$. We will denote such a Taylor model by $P + \mathcal{B}(\varepsilon)$ and consider it as a ball of functions: given an analytic function $f$ on $\mathcal{D}_r$ and $\boldsymbol{f} = P + \mathcal{B}(\varepsilon)$, we write $f \in \boldsymbol{f}$ if $\|f - P\|_{\mathcal{D}_r} = \sup_{|z| \leqslant r} |f(z) - P(z)| \leqslant \varepsilon$. Basic arithmetic on Taylor models works in a similar way as ball arithmetic. The ring operations are defined as follows:
\begin{eqnarray*}
(P + \mathcal{B}(\varepsilon)) + (Q + \mathcal{B}(\delta)) & = & (P + Q) + \mathcal{B}(\varepsilon + \delta) \\
(P + \mathcal{B}(\varepsilon))\, (Q + \mathcal{B}(\delta)) & = & (P Q)_{< n} + \mathcal{B}\big( \lceil (P Q)_{\geqslant n} \rceil + \lceil P \rceil\, \delta + \lceil Q \rceil\, \varepsilon + \varepsilon\, \delta \big).
\end{eqnarray*}
Given a polynomial $A \in \mathbb{C}[z]$, the product formula uses the notations
\begin{eqnarray*}
A_{< n} & = & A_0 + \cdots + A_{n-1}\, z^{n-1} \\
A_{\geqslant n} & = & A_n\, z^n + A_{n+1}\, z^{n+1} + \cdots,
\end{eqnarray*}
and $\lceil A \rceil$ denotes any upper bound for $\|A\|_{\mathcal{D}_r}$ that is easy to compute. One may for instance take
\[ \lceil A \rceil \;=\; |A_0| + |A_1|\, r + |A_2|\, r^2 + \cdots. \]
Now consider the operation $\int$ of integration from zero: $\big(\int f\big)(z) = \int_0^z f(u)\, \mathrm{d}u$. The integral of a Taylor model may be computed using
\[ \textstyle\int (P + \mathcal{B}(\varepsilon)) \;=\; \big(\int P\big)_{< n} + \mathcal{B}\big( |P_{n-1}|\, r^n / n + r\, \varepsilon \big). \]
This formula is justified by the mean value theorem. In practice, the numerical computations at a given working precision involve additional rounding errors. Bounds for these rounding errors have to be added to the errors in the above formulas.
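The Taylor model operations just described can be condensed into a toy Python implementation on the unit disk ($r = 1$). The class below is illustrative only: it follows the multiplication and integration formulas above, but ignores the rounding errors of floating point arithmetic that a real implementation must additionally bound.

```python
class TaylorModel:
    """Taylor model P + B(0, eps) of order n on the unit disk: a polynomial
    (list of n coefficients) plus a sup-norm error bound eps."""
    def __init__(self, coeffs, eps, n):
        self.n = n
        self.c = list(coeffs) + [0.0] * (n - len(coeffs))
        self.eps = eps

    def bound(self):
        # Easy upper bound for sup |P(z)| on the unit disk: sum of |coeffs|.
        return sum(abs(a) for a in self.c)

    def __mul__(self, other):
        n = self.n
        prod = [0.0] * (2 * n - 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                prod[i + j] += a * b
        tail = sum(abs(a) for a in prod[n:])  # bound for (PQ)_{>=n} on the disk
        eps = (tail + self.bound() * other.eps
               + other.bound() * self.eps + self.eps * other.eps)
        return TaylorModel(prod[:n], eps, n)

    def integrate(self):
        # (P + B(eps)) -> (int P)_{<n} + B(|P_{n-1}|/n + eps)  (unit disk)
        q = [0.0] + [self.c[k] / (k + 1) for k in range(self.n - 1)]
        return TaylorModel(q, abs(self.c[self.n - 1]) / self.n + self.eps, self.n)

# Example: square the model 1 + z + B(0.1), then integrate the result.
tm = TaylorModel([1.0, 1.0], 0.1, 3)
sq = tm * tm
it = sq.integrate()
```

Squaring $1 + z + \mathcal{B}(0.1)$ at order $3$ gives the polynomial $1 + 2z + z^2$ exactly (no truncated tail), with error $2 \cdot 0.1 + 2 \cdot 0.1 + 0.1^2 = 0.41$.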
It is also easy to define Taylor models on disks $\mathcal{B}(c, r)$ with general centers $c$, as being given by Taylor models on $\mathcal{D}_r$ in the variable $w = z - c$. For more details, we refer to the literature on Taylor models.

\subsection{Lagrange models}

A Lagrange model of order $n \in \mathbb{N}$ on $\mathcal{D}_r$ is a functional ``ball'' of the form $\boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n$, where $\boldsymbol{B} \in \mathbb{B}[z]$ is a ball polynomial of degree $< n$ and $\varepsilon \in \mathbb{R}^{\geqslant}$ the so called tail bound. Given an analytic function $f$ on $\mathcal{D}_r$ and $\boldsymbol{f} = \boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n$, we write $f \in \boldsymbol{f}$ if $f_k \in \boldsymbol{B}_k$ for all $k < n$ and $\|f_{\geqslant n}\, z^{-n}\|_{\mathcal{D}_r} \leqslant \varepsilon$. The name ``Lagrange model'' is motivated by Taylor--Lagrange's formula, which provides a careful estimate for the truncation error of a Taylor series expansion. We may also regard Lagrange models as a way to substantiate the ``big Oh'' term in the expansion $f = f_0 + \cdots + f_{n-1}\, z^{n-1} + O(z^n)$.

Basic arithmetic on Lagrange models works in a similar way as in the case of Taylor models:
\begin{eqnarray*}
(\boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n) + (\boldsymbol{C} + \mathcal{B}(\delta)\, z^n) & = & (\boldsymbol{B} + \boldsymbol{C}) + \mathcal{B}(\varepsilon + \delta)\, z^n \\
(\boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n)\, (\boldsymbol{C} + \mathcal{B}(\delta)\, z^n) & = & (\boldsymbol{B} \boldsymbol{C})_{< n} + \mathcal{B}\big( \lceil (\boldsymbol{B} \boldsymbol{C})_{\geqslant n}\, z^{-n} \rceil + \lceil \boldsymbol{B} \rceil\, \delta + \lceil \boldsymbol{C} \rceil\, \varepsilon + r^n\, \varepsilon\, \delta \big)\, z^n.
\end{eqnarray*}
This time, we may take
\begin{eqnarray*}
\lceil \boldsymbol{B} \rceil & = & \lceil \boldsymbol{B}_0 \rceil + \lceil \boldsymbol{B}_1 \rceil\, r + \cdots + \lceil \boldsymbol{B}_{n-1} \rceil\, r^{n-1} \\
\lceil \mathcal{B}(c, \rho) \rceil & = & |c| + \rho
\end{eqnarray*}
as the ``easy to compute'' upper bound of a ball polynomial $\boldsymbol{B} \in \mathbb{B}[z]$.

The main advantage of Lagrange models with respect to Taylor models is that they allow for more precise tail bounds for integrals:
\[ \textstyle\int (\boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n) \;=\; \big(\int \boldsymbol{B}\big)_{< n} + \mathcal{B}\big( \lceil \boldsymbol{B}_{n-1} \rceil / n + r\, \varepsilon / (n + 1) \big)\, z^n. \]
Indeed, for any analytic function $h$ on $\mathcal{D}_r$ and $f = h\, z^n$, integration on a straight line segment from $0$ to any $z \in \mathcal{D}_r$ yields
\[ \Big| \int_0^z f(u)\, \mathrm{d}u \Big| \;=\; \Big| z^{n+1} \int_0^1 h(v z)\, v^n\, \mathrm{d}v \Big| \;\leqslant\; |z|^{n+1}\, \frac{\|h\|_{\mathcal{D}_r}}{n + 1}, \]
whence $\big\| \big(\int f\big)\, z^{-n} \big\|_{\mathcal{D}_r} \leqslant r\, \|h\|_{\mathcal{D}_r} / (n + 1)$.

The main disadvantage of Lagrange models with respect to Taylor models is that they require more data (individual error bounds for the coefficients of the polynomial) and that basic arithmetic is slightly more expensive. Indeed, arithmetic on ball polynomials is a constant times more expensive than ordinary polynomial arithmetic. Nevertheless, this constant tends to one if either $n$ or the working precision gets large.
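The claim that integration has norm $1/(n+1)$ on functions divisible by $z^n$ can be checked numerically. The sketch below takes the sample function $f = z^n$ on the unit disk ($r = \varepsilon = 1$), integrates it along segments from the origin with a simple trapezoidal rule (an illustrative choice of quadrature), and compares the supremum of $|z^{-n} \int_0^z f|$ on the boundary, where $|z^{-n}| = 1$, with the Lagrange bound $1/(n+1)$; by the maximum principle, the boundary supremum is the supremum on the disk.

```python
import cmath
import math

n = 10

def f(z):
    """Sample analytic function divisible by z^n, with sup-norm 1 on the disk."""
    return z ** n

def integral(z, steps=1000):
    # Integrate f along the straight segment [0, z] by the trapezoidal rule.
    total = 0.0 + 0.0j
    for k in range(steps):
        a = z * k / steps
        b = z * (k + 1) / steps
        total += (f(a) + f(b)) / 2 * (b - a)
    return total

# Supremum of |integral(z)| over sample points of the unit circle
# (|z^{-n}| = 1 there, so the normalizing factor can be dropped).
sup = max(abs(integral(cmath.exp(2j * math.pi * t / 64))) for t in range(64))
bound = 1.0 / (n + 1)  # Lagrange tail bound r * eps / (n + 1) with r = eps = 1
```

Here the bound is sharp: $\int_0^z u^n\, \mathrm{d}u = z^{n+1}/(n+1)$ has modulus exactly $1/(n+1)$ on the unit circle, whereas a Taylor model, which ignores the divisibility by $z^n$, can only guarantee the bound $r\,\varepsilon = 1$.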
This makes Lagrange models particularly well suited for high precision computations, where they cause negligible overhead, but greatly improve the quality of tail bounds for integrals. For efficient implementations of basic arithmetic on ball polynomials, we refer to the literature.

\subsection{Reliable integration of dynamical systems}

Let us now return to the dynamical system (1). We already mentioned relaxed power series computations and Newton's method as two efficient techniques for the numerical computation of power series solutions of differential equations. These methods can still be used for ball coefficients, modulo some preconditioning or suitable tweaking of the basic arithmetic on ball polynomials. In order to obtain a local certified integrator in the sense of Section 3.1, it remains to be shown how to compute tail bounds for truncated power series solutions at order $n$.

From now on, we will be interested in finding local certified solutions of (1) at the origin. We may rewrite the system (1) together with the initial condition $f(0) = I$ as a fixed point equation
\[ f \;=\; I + \int \Phi(f). \]
Now assume that a Lagrange model $\boldsymbol{f}$ satisfies
\[ \boldsymbol{I} + \int \Phi(\boldsymbol{f}) \;\subseteq\; \boldsymbol{f}. \]
Then we claim that for any $I \in \boldsymbol{I}$ (and any admissible choice of the coefficients of $\Phi$ in their respective balls), the solution $f$ of (1) with $f(0) = I$ satisfies $f \in \boldsymbol{f}$. Indeed, the analytic functions $g$ with $g \in \boldsymbol{f}$ form a compact set, so the operator $g \mapsto I + \int \Phi(g)$ maps this set into itself and admits a fixed point. This fixed point is actually unique, since its coefficients can be computed uniquely by induction.

Using ball power series computations, we already know how to compute a ball polynomial $\boldsymbol{B}$ of degree $< n$ such that
\[ \boldsymbol{I} + \int \Phi(\boldsymbol{B}) \;\subseteq\; \boldsymbol{B} + O(z^n). \]
Taking $\boldsymbol{f} = \boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n$, it remains to be shown how to compute $\varepsilon \in \mathbb{R}^{\geqslant}$ in such a way that the fixed point inclusion above is satisfied.
Now denoting by $J$ the Jacobian matrix of $\Phi$, and putting $\boldsymbol{\varepsilon} = \mathcal{B}(\varepsilon)\, z^n$, we have
\[ \Phi(\boldsymbol{B} + \boldsymbol{\varepsilon}) \;\subseteq\; \Phi(\boldsymbol{B}) + J(\boldsymbol{B} + \boldsymbol{\varepsilon})\, \boldsymbol{\varepsilon}. \]
Writing $\boldsymbol{I} + \int \Phi(\boldsymbol{B}) \subseteq \boldsymbol{B} + \boldsymbol{\varrho}$ with $\boldsymbol{\varrho} = \mathcal{B}(\varrho)\, z^n$, we thus have $\boldsymbol{I} + \int \Phi(\boldsymbol{B} + \boldsymbol{\varepsilon}) \subseteq \boldsymbol{B} + \boldsymbol{\varrho} + \int J(\boldsymbol{B} + \boldsymbol{\varepsilon})\, \boldsymbol{\varepsilon}$, and it suffices to find $\boldsymbol{\varepsilon}$ such that
\[ \boldsymbol{\varrho} + \int J(\boldsymbol{B} + \boldsymbol{\varepsilon})\, \boldsymbol{\varepsilon} \;\subseteq\; \boldsymbol{\varepsilon}. \]
Since the integrand is divisible by $z^n$, the tail bound of the integral only gets multiplied by $r/(n+1)$. Assuming that all eigenvalues of $J(\boldsymbol{B} + \boldsymbol{\varepsilon})$ are strictly bounded by $(n+1)/r$, it therefore suffices to ``take''
\[ \varepsilon \;=\; \Big\lceil \Big( 1 - \tfrac{r}{n+1}\, J(\boldsymbol{B} + \boldsymbol{\varepsilon}) \Big)^{-1} \Big\rceil\, \varrho. \]
We have to be a little bit careful here, since $J(\boldsymbol{B} + \boldsymbol{\varepsilon})$ depends on $\boldsymbol{\varepsilon}$. Nevertheless, this formula provides a good ansatz: starting with $\varepsilon^{(0)} = 0$, we may define
\[ \varepsilon^{(i+1)} \;=\; \Big\lceil \Big( 1 - \tfrac{r}{n+1}\, J\big(\boldsymbol{B} + \mathcal{B}(\varepsilon^{(i)})\, z^n\big) \Big)^{-1} \Big\rceil\, \varrho \]
for all $i$. If $r$ was chosen small enough, then this sequence quickly stabilizes, say at stage $l$. We then set $\varepsilon = \varepsilon^{(l)} + 2\, (\varepsilon^{(l+1)} - \varepsilon^{(l)})$, and check whether the fixed point inclusion $\boldsymbol{I} + \int \Phi(\boldsymbol{f}) \subseteq \boldsymbol{f}$ holds for $\boldsymbol{f} = \boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n$. If so, then we have obtained the required Lagrange model solution. Otherwise, we will need to decrease $r$, or increase $n$ and the working precision.

\subsection{Discussion}

Several remarks are in order about the method from the previous subsection. Let us first consider the important case when $\boldsymbol{I} = I$ is given exactly, and let $R$ denote the radius of convergence of the unique solution $f$ of (1). For large working precisions $p$ and expansion orders $n$, we can make $\varrho$ arbitrarily small. Assuming that the eigenvalues of $J(\boldsymbol{B} + \boldsymbol{\varepsilon})$ are strictly bounded by $(n+1)/r$, this also implies that $\varepsilon^{(1)}, \varepsilon^{(2)}, \ldots$ become arbitrarily small, and that $\varepsilon = \varepsilon^{(l)} + 2\, (\varepsilon^{(l+1)} - \varepsilon^{(l)})$ satisfies the required inclusion. In other words, for any $r < R$, there exist a sufficiently large order $n$ and working precision $p$ for which the method succeeds.

Let us now investigate what happens if we apply the same method with Taylor models instead of Lagrange models. In that case, the equation for $\varepsilon$ becomes
\[ \varepsilon \;=\; \big\lceil \big( 1 - r\, J(\boldsymbol{B} + \boldsymbol{\varepsilon}) \big)^{-1} \big\rceil\, \varrho. \]
On the one hand, this implies that the method breaks down as soon as $r$ reaches the inverse of the largest eigenvalue of $J(\boldsymbol{B} + \boldsymbol{\varepsilon})$, which may happen for $r \ll R$. Even if $J$ is constant (i.e. (1) reduces to the linear differential equation $f' = J f$ for a constant matrix $J$), the step size cannot exceed the inverse of the maximal eigenvalue of $J$.
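The contrast between the two fixed point equations for $\varepsilon$ can be made tangible on a scalar toy example. The Python sketch below treats $f' = f$, $f(0) = 1$ (so $\Phi(f) = f$, with Jacobian bound $J = 1$) at order $n = 10$ on a disk of radius $r = 2$; the simplification of a constant Jacobian bound and the function names are ours. With the Lagrange contraction factor $r J / (n+1)$ the iteration converges, while the Taylor factor $r J \geqslant 1$ makes the method break down, even though the exact solution $e^z$ is entire.

```python
import math

# Certify f' = f, f(0) = 1 on the disk of radius r, truncation order n.
n, r = 10, 2.0
# The truncated series sum_{k<n} z^k / k! leaves the single defect term
# z^n / n!; normalized by z^n, its sup on the disk is 1/n!.
rho = 1.0 / math.factorial(n)
J = 1.0  # bound for the Jacobian of Phi on the enclosure (Phi(f) = f)

def fixed_point(contraction):
    """Iterate eps <- rho + contraction * eps; None if it cannot contract."""
    if contraction >= 1.0:
        return None  # breakdown: no finite eps can satisfy the inclusion
    eps = 0.0
    for _ in range(100):
        eps = rho + contraction * eps
    return eps

eps_lagrange = fixed_point(r * J / (n + 1))  # Lagrange models: factor r/(n+1)
eps_taylor = fixed_point(r * J)              # Taylor models: factor r
```

The Lagrange iteration stabilizes at $\varrho / (1 - r J/(n+1))$, a tail bound of the order of $1/n!$, whereas the Taylor variant fails for any $r \geqslant 1/J$.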
On the other hand, and still in the case when $J$ is constant, we see that Lagrange models allow us to take a step size which is $n + 1$ times as large. In general, the gain will be smaller, since $J$ usually admits larger eigenvalues on larger disks. Nevertheless, Lagrange models will systematically allow for larger step sizes.

Notice that the matrices that we need to invert in the above equations for $\varepsilon$ admit Lagrange model entries, which should really be regarded as functions. Ideally speaking, we would like to compute a uniform bound for the inverses of the evaluations of these matrices at all points of $\mathcal{D}_r$. However, this may be computationally expensive. Usually, it is preferable to replace each Lagrange model entry $\boldsymbol{B} + \mathcal{B}(\varepsilon)\, z^n$ of the matrix to be inverted by the ball enclosure $\boldsymbol{B}_0 + \boldsymbol{B}_1\, \mathcal{B}(0, r) + \cdots + \boldsymbol{B}_{n-1}\, \mathcal{B}(0, r)^{n-1} + \mathcal{B}(0, \varepsilon\, r^n)$. The resulting ball matrix can be inverted much faster, although the resulting error bounds may be of inferior quality.

\begin{thebibliography}{28}

\bibitem{AH83} G. Alefeld and J. Herzberger. \emph{Introduction to Interval Computations}. Academic Press, New York, 1983.

\bibitem{BK78} R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. \emph{Journal of the ACM}, 25:581--595, 1978.

\bibitem{GS88} T. N. Gambill and R. D. Skeel. Logarithmic reduction of the wrapping effect with application to ordinary differential equations. \emph{SIAM Journal on Numerical Analysis}, 25(1):153--162, 1988.

\bibitem{vdH:relax} J. van der Hoeven. Relax, but don't be too lazy. \emph{Journal of Symbolic Computation}, 34:479--542, 2002.

\bibitem{vdH:ball:greifswald} J. van der Hoeven. Ball arithmetic. In A. Beckmann, C. Gaßner, and B. Löwe, editors, \emph{International Workshop on Logical Approaches to Barriers in Computing and Complexity}, number 6 in Preprint-Reihe Mathematik, pages 179--208. Ernst-Moritz-Arndt-Universität Greifswald, February 2010.

\bibitem{vdH:fnewton} J. van der Hoeven. Newton's method and FFT trading. \emph{Journal of Symbolic Computation}, 45(8):857--878, 2010.

\bibitem{vdH:fastrelax} J. van der Hoeven. Faster relaxed multiplication. In \emph{Proc. ISSAC 2014}, pages 405--412, Kobe, Japan, July 2014.

\bibitem{vdH:jncf} J. van der Hoeven. Calcul analytique. In \emph{Journées Nationales de Calcul Formel}, Les cours du CIRM 2(1), Exp. No. 4, 85 pages. CEDRAM, 2011.

\bibitem{JKDW01} L. Jaulin, M. Kieffer, O. Didrit, and E. Walter. \emph{Applied Interval Analysis}. Springer, London, 2001.

\bibitem{Krei:web:interval} V. Kreinovich. Interval computations. Website with useful information and references on interval computations.

\bibitem{Kul08} U. W. Kulisch. \emph{Computer Arithmetic and Validity: Theory, Implementation, and Applications}. Number 33 in Studies in Mathematics. De Gruyter, 2008.

\bibitem{Kuhn98} W. Kühn. Rigorously computed orbits of dynamical systems without the wrapping effect. \emph{Computing}, 61:47--67, 1998.

\bibitem{Loh88} R. Lohner. PhD thesis, Universität Karlsruhe, 1988.

\bibitem{Loh01} R. Lohner. On the ubiquity of the wrapping effect in the computation of error bounds. In U. Kulisch, R. Lohner, and A. Facius, editors, \emph{Perspectives on Enclosure Methods}, pages 201--217. Springer, Wien, New York, 2001.

\bibitem{MB96} K. Makino and M. Berz. Remainder differential algebras and their applications. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, \emph{Computational Differentiation: Techniques, Applications and Tools}, pages 63--74. SIAM, Philadelphia, 1996.

\bibitem{MB04} K. Makino and M. Berz. Suppression of the wrapping effect by Taylor model-based validated integrators. MSU Report MSUHEP 40910, Michigan State University, 2004.

\bibitem{Moo65} R. E. Moore. Automatic local coordinate transformations to reduce the growth of error bounds in interval computation of solutions to ordinary differential equations. In L. B. Rall, editor, \emph{Error in Digital Computation}, volume 2, pages 103--140. John Wiley, 1965.

\bibitem{Moo66} R. E. Moore. \emph{Interval Analysis}. Prentice Hall, Englewood Cliffs, N.J., 1966.

\bibitem{MKC09} R. E. Moore, R. B. Kearfott, and M. J. Cloud. \emph{Introduction to Interval Analysis}. SIAM Press, 2009.

\bibitem{Neu93} A. Neumaier. The wrapping effect, ellipsoid arithmetic, stability and confidence regions. \emph{Computing Supplementum}, 9:175--190, 1993.

\bibitem{Neu02} A. Neumaier. Taylor forms---use and limits. \emph{Reliable Computing}, 9:43--79, 2002.

\bibitem{Nick85} K. Nickel. How to fight the wrapping effect. In \emph{Interval Mathematics 1985}, pages 121--132. Springer-Verlag, 1985.

\bibitem{Neu90} A. Neumaier. \emph{Interval Methods for Systems of Equations}. Cambridge University Press, Cambridge, 1990.

\bibitem{PTVF07} W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. \emph{Numerical Recipes: The Art of Scientific Computing}. Cambridge University Press, 3rd edition, 2007.

\bibitem{Rump10} S. M. Rump. Verification methods: rigorous results using floating-point arithmetic. \emph{Acta Numerica}, 19:287--449, 2010.

\bibitem{Sedo01} A. Sedoglavic. \emph{Méthodes seminumériques en algèbre différentielle; applications à l'étude des propriétés structurelles de systèmes différentiels algébriques en automatique}. PhD thesis, École polytechnique, 2001.

\bibitem{Tur36} A. Turing. On computable numbers, with an application to the Entscheidungsproblem. \emph{Proceedings of the London Mathematical Society}, 2(42):230--265, 1936.

\bibitem{Wei00} K. Weihrauch. \emph{Computable Analysis}. Springer-Verlag, Berlin/Heidelberg, 2000.

\end{thebibliography}