Certifying trajectories of dynamical systems

Joris van der Hoeven

Laboratoire d'informatique, UMR 7161 CNRS

Campus de l'École polytechnique

1, rue Honoré d'Estienne d'Orves

Bâtiment Alan Turing, CS35003

91120 Palaiseau

Email: vdhoeven@lix.polytechnique.fr

November 3, 2023

This paper concerns the reliable integration of dynamical systems, with a focus on the computation of one specific trajectory for a given initial condition at high precision. We describe several algorithmic tricks which allow for faster parallel computations and better error estimates. We also introduce so-called “Lagrange models”. These serve a purpose similar to that of the more classical Taylor models, but we will show that they allow for larger step sizes, especially when the truncation orders get large.

Keywords: reliable computation, dynamical systems, certified integration, ball arithmetic, Taylor models, multiple precision computations

A.M.S. subject classification: 65G20, 37-04

1. Introduction

Let $\Phi \in \mathbb{C}[F_1, \ldots, F_d]^d$ be a polynomial vector field in the unknowns $F = (F_1, \ldots, F_d)$ and consider the dynamical system

$F' = \Phi(F). \quad (1)$

Given an initial condition $F(0) = I \in \mathbb{C}^d$ and a target point $u \in \mathbb{C}$ such that $F$ is analytic on the straight line segment $[0, u]$, the topic of this paper is to compute $F(u)$.

On actual computers, this problem can only be solved at finite precisions, although the user might request the precision to be as large as needed. One high level way to formalize this is to assume that the numbers in the input (i.e. the coefficients of $\Phi$ and $I$, as well as $u$) are computable [27, 28] and to request the output $F(u)$ to again be a vector of computable complex numbers.

From a more practical point of view, it is customary to perform the computations using interval arithmetic [18, 1, 23, 9, 11, 19, 25]. In our complex setting, we prefer to use a variant, called ball arithmetic or midpoint-radius arithmetic. In our main problem, this means that we replace the input coefficients by complex balls, and that the output $F(u)$ is again requested to be a vector of balls.

Throughout this paper, we assume that the reader is familiar with interval and ball arithmetic. We refer to [5, 8] for basic details on ball arithmetic. The website [10] provides a lot of information on interval analysis.

It will be convenient to denote balls using a bold font, e.g. $\boldsymbol{z}$ instead of $z$. The explicit compact ball with center $c$ and radius $r$ will be denoted by $\mathcal{B}(c, r)$. Vector notation will also be used systematically. For instance, if $c \in \mathbb{C}^d$ and $r \in (\mathbb{R}^{\geqslant})^d$ with $d \in \mathbb{N}$, then $\mathcal{B}(c, r) = (\mathcal{B}(c_1, r_1), \ldots, \mathcal{B}(c_d, r_d))$.
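To fix ideas, here is a minimal midpoint-radius arithmetic in Python; the class name and the crude rounding margin EPS are our own illustrative choices, and a serious implementation would use correct outward rounding and multiple precision arithmetic, as in [5].

```python
class Ball:
    """A complex ball with center c and radius r (midpoint-radius arithmetic)."""
    EPS = 2.0 ** -50  # crude, slightly pessimistic bound for one rounding error

    def __init__(self, c, r=0.0):
        self.c, self.r = complex(c), float(r)

    def _adjust(self, c, r):
        # Inflate the radius to absorb the rounding error of the computed center.
        return Ball(c, r + abs(c) * Ball.EPS)

    def __add__(self, other):
        return self._adjust(self.c + other.c, self.r + other.r)

    def __mul__(self, other):
        # |z w - c c'| <= |c| r' + |c'| r + r r' whenever z in B(c,r), w in B(c',r').
        return self._adjust(self.c * other.c,
                            abs(self.c) * other.r + abs(other.c) * self.r
                            + self.r * other.r)

    def __repr__(self):
        return f"B({self.c}, {self.r:.3g})"

x = Ball(1 + 1j, 1e-10)
print(x * x + x)  # encloses (1 + i)^2 + (1 + i) = 1 + 3i
```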

Sometimes, it is useful to obtain further information about the dependence of the value $F(u)$ on the initial conditions; this means that we are interested in the flow $F(t, I)$, which satisfies the same differential equation (1) and the initial condition $F(0, I) = I$. In particular, the first variation $V = \partial F / \partial I$ is an important quantity, since it measures the sensitivity of the output on the initial conditions. If $\kappa$ denotes the condition number of our problem (essentially the norm of $V$ at the end point $u$), then it will typically be necessary to compute with a precision of at least $\log_2 \kappa$ bits in order to obtain any useful output.

There is a vast literature on the reliable integration of dynamical systems [17, 18, 22, 13, 3, 20, 15, 12, 14, 21, 16]. Until recently, most work focused on low precisions, allowing for efficient implementations using machine arithmetic. For practical purposes, it is also useful to have higher order information about the flow. Taylor models are currently the most efficient device for performing this kind of computation [15, 16].

In this paper, we are interested in the time complexity of reliable integration of dynamical systems. We take a more theoretical perspective in which the working precision might become high. We are interested in certifying one particular trajectory, so we do not request any information about the flow beyond the first variation.

From this complexity point of view, it is important to stress that there is a tradeoff between efficiency and quality: faster algorithms can typically be designed if we allow for larger radii in the output. Whenever one of these radii becomes infinite, we say that the integration method breaks down: a ball with infinite radius no longer provides any useful information. Some “radius swell” occurs structurally, as soon as the condition number $\kappa$ becomes large. But high quality integration methods should limit all other sources of precision loss.

The outline of this paper is as follows:

  1. For problems from reliable analysis it is usually best to perform certifications at the outermost level. In our case, this means that we first compute the entire numeric trajectory with a sufficient precision, and only perform the certification at a second stage. We will see that this numeric computation is the only part of the method which is essentially sequential.

  2. The problem of certifying a complete trajectory contains a global and a local part. From the global point of view, we need to cut the trajectory into smaller pieces that can be certified by local means, and then devise a method to recombine the local certificates into a global one.

  3. For the local certification, we will introduce Lagrange models. As in the case of Taylor models, this approach is based on Taylor series expansions, but the more precise error estimates allow for larger time steps.

The first idea has been applied to many problems from reliable computation (it is for instance known as Hansen's method in the case of matrix inversion). Nevertheless, we think that progress is often possible by applying this simple idea even more systematically. In Section 2, we will briefly recall some facts about the efficient numeric integration of (1).

In Section 3, we discuss the way in which the problem of certifying a global trajectory can be reduced to more local problems. It is interesting to compare this approach with the more classical scheme of certifying each step separately, along with the numerical integration itself. The problem with stepwise schemes is that they are essentially sequential and thereby give rise to a precision loss that is linear in the number of steps. The global approach reduces this to a logarithmic precision loss only. The global strategy already pays off in the linear case [3], and it is possible to reorganize the computations in such a way that they can be re-incorporated into an iterative scheme. Our current presentation has the merit of conceptual simplicity and ease of implementation.

The main contribution of this paper concerns the introduction of “Lagrange models” and the way that such models allow for larger time steps. A classical Taylor model approximates an analytic function $f$ on the compact unit disk (say) by a polynomial $P$ of degree $< n$ and an error $\varepsilon \geqslant 0$ with the property that $|f(z) - P(z)| \leqslant \varepsilon$ for all $|z| \leqslant 1$. The idea behind Lagrange models is to give a more precise meaning to the “big Oh” in $f(z) = P(z) + O(z^n)$. More precisely, such a model consists of a polynomial $\boldsymbol{P}$ of degree $< n$ with ball coefficients and an $\varepsilon \geqslant 0$ such that $f(z) \in \boldsymbol{P}(z) + \mathcal{B}(0, \varepsilon)\, z^n$ for all $|z| \leqslant 1$. The advantage comes from the fact that the integration operator $\int_0^z$ has norm $1$ for general analytic functions on the unit disk, but only norm $1 / (n + 1)$ for analytic functions that are divisible by $z^n$. Although Lagrange model arithmetic can be a constant factor more expensive than Taylor model arithmetic, the more precise error estimates allow us to increase the time step. An earlier (non refereed) version of the material from Section 4 appeared in the lecture notes [8].

The algorithm from Section 4 has been implemented in the continewz package of the Mathemagix system, both for Taylor models and Lagrange models. For high precision computations, we indeed observed improved step sizes in the case of Lagrange models.

2. Fast numerical integration

Since the main focus of this paper is on certification, we will content ourselves with a quick survey of the most significant facts about numerical integration schemes.

2.1. Classical algorithms

From a high level perspective, the integration problem between two times $t_0$ and $t_1$ can be decomposed into two parts: finding suitable intermediate points $t_0 = u_0 < u_1 < \cdots < u_k = t_1$ and the actual computation of $F(u_{i+1})$ as a function of $F(u_i)$ for $i = 0, \ldots, k - 1$ (or as a function of several previous values $F(u_i), F(u_{i-1}), \ldots$ for multistep schemes).

The optimal choice of intermediate points is determined by the distance to the closest singularities of $F$ in the complex plane, as well as by the efficiency of the scheme that computes $F(u_{i+1})$ as a function of $F(u_i)$. For $t \in [t_0, t_1]$, let $\rho(t)$ be the convergence radius of the function $F$ at $t$. High order integration schemes will enable us to take step sizes $u_{i+1} - u_i$ of the order of $\rho(u_i) / K$ for some fixed constant $K > 1$. Lower order schemes may force us to take smaller steps, but perform individual steps more efficiently. In some cases (e.g. when $F$ admits many singularities just above the real axis, but none below), it may also be interesting to allow for intermediate points in $\mathbb{C}$ that keep a larger distance from the singularities of $F$.

For small working precisions, Runge–Kutta methods [24] provide the most efficient schemes for numerical integration; a Runge–Kutta method of a given order requires a fixed number of evaluations of $\Phi$ for each step, and this number grows with the order. For somewhat larger working precisions (e.g. quadruple precision), higher order methods may be required in order to produce accurate results. A first alternative is to use relaxed power series computations [4, 7], which only involve a moderate overhead with respect to the corresponding numerical evaluations. For very large orders, a power series analogue of Newton's method provides the most efficient numerical integration method [2, 26, 6]. This method actually computes the first variation of the solution along with the solution itself, which is very useful for the purpose of this paper.
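To make the power series approach concrete, the sketch below computes the Taylor coefficients of the solution of a simple equation by the naive recursive method (neither relaxed nor Newton-based), and uses them to perform one high order step; the equation $y' = y^2$, the order, and the step size are arbitrary choices for illustration.

```python
def taylor_step(y0, n, h):
    """One high order integration step for y' = y^2, y(0) = y0.

    Computes the Taylor coefficients c[0..n-1] of the solution at the
    current point via the recursion (k+1) c[k+1] = [z^k] y^2, then
    evaluates the truncated series at z = h."""
    c = [y0] + [0.0] * (n - 1)
    for k in range(n - 1):
        # Cauchy product: coefficient of z^k in y * y.
        conv = sum(c[i] * c[k - i] for i in range(k + 1))
        c[k + 1] = conv / (k + 1)
    # Horner evaluation of the truncated Taylor series at h.
    y = 0.0
    for coeff in reversed(c):
        y = y * h + coeff
    return y

# Exact solution of y' = y^2 with y(0) = 1 is 1 / (1 - t).
print(taylor_step(1.0, 30, 0.1), 1 / (1 - 0.1))
```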

2.2. Parallelism

Another interesting question concerns the amount of computation that can be done in parallel. In principle, the integration process is essentially sequential (apart from some parallelism which may be possible at each individual step). Nevertheless, given a full numeric solution at a sufficiently large precision $p$, we claim that a solution at roughly doubled precision can be computed in parallel.

More precisely, given intermediate points $u_0 < \cdots < u_k$ as above, for each $0 \leqslant i \leqslant j \leqslant k$ and $z \in \mathbb{C}^d$, let $F_{i, z}$ be the solution of (1) with $F_{i, z}(u_i) = z$, and denote $\Phi_{i, j}(z) = F_{i, z}(u_j)$. We regard $\Phi_{i, j}$ as the “transitional flow” between $u_i$ and $u_j$, assuming the initial condition $z$ at $u_i$. Notice that $\Phi_{i, i} = \mathrm{Id}$ and, for $i \leqslant j \leqslant k$,

$\Phi_{i, k} = \Phi_{j, k} \circ \Phi_{i, j}.$

Now assume that we are given approximations $z_i \approx F(u_i)$ for $i = 0, \ldots, k$ at precision $p$. Then we may compute the values $\Phi_{i, i+1}(z_i)$ and the Jacobian matrices $(\partial \Phi_{i, i+1} / \partial z)(z_i)$ in parallel at precision $2 p$. Using a dichotomic procedure of depth $\log_2 k$, we will compute improved approximations $\tilde{z}_i \approx F(u_i)$ at precision $2 p$ in parallel for $i = 0, \ldots, k$, together with approximations of $(\partial \Phi_{0, k} / \partial z)(z_0)$ at precision $p$.

Assume that $k = 2^l$ is a power of two and let $m = k / 2$. We start with the recursive computation of the improved approximations $\tilde{z}_i$ at precision $2 p$ for $i \leqslant m$ (resp. of approximations $\hat{z}_i$ of the flow started at $z_m$, for $m \leqslant i \leqslant k$), together with Jacobians $A \approx (\partial \Phi_{0, m} / \partial z)(z_0)$ and $B_i \approx (\partial \Phi_{m, i} / \partial z)(z_m)$ at precision $p$. Setting $\delta = \tilde{z}_m - z_m$, which is of the order of $2^{-p}$, we have

$\Phi_{m, i}(\tilde{z}_m) = \hat{z}_i + B_i\, \delta + O(2^{-2 p})$

at precision $2 p$ (for $m \leqslant i \leqslant k$) and

$\frac{\partial \Phi_{0, k}}{\partial z}(z_0) = B_k\, A + O(2^{-p})$

at precision $p$. It thus suffices to take $\tilde{z}_i = \hat{z}_i + B_i\, \delta$ for $m < i \leqslant k$ and $B_k A$ for the Jacobian over the full range.
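The following toy sketch illustrates the correction step on the scalar equation $y' = y$, whose transitional flows $\Phi_{i,j}(z) = \mathrm{e}^{u_j - u_i} z$ and first variations are known in closed form; in a real implementation these would come from numerical integration at the higher precision, the per-interval evaluations parallelize trivially, and the propagation of the corrections can be organized dichotomically as described above.

```python
import math

def refine(u, z):
    """Correct a low precision trajectory z[i] ~ F(u[i]) for y' = y using
    the transitional flows Phi(z) = exp(du) z and their derivatives."""
    k = len(u) - 1
    znew = [z[0]]  # the initial condition is taken to be exact
    for i in range(k):
        du = u[i + 1] - u[i]
        phi, dphi = math.exp(du) * z[i], math.exp(du)  # flow at the old point
        # First order correction: Phi(znew) ~ Phi(z) + Phi'(z) (znew - z).
        znew.append(phi + dphi * (znew[i] - z[i]))
    return znew

u = [0.1 * i for i in range(11)]
z = [math.exp(t) * (1 + 1e-8) for t in u]  # low precision data...
z[0] = 1.0                                  # ...with an exact initial value
print(max(abs(zi - math.exp(t)) for zi, t in zip(refine(u, z), u)))
```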

The above algorithm suggests an interesting practical strategy for the integration of dynamical systems on massively parallel computers: the fastest processor(s) in the system play the rôle of a “spearhead” and perform a low precision integration at top speed. The remaining processors are used for enhancing the precision as soon as a rough initial guess of the trajectory is known. The spearhead may occasionally have to redo some computations whenever the initial guess drifts too far away from the actual solution. The remaining processors might also compute other types of “enhancements”, such as the first and higher order variations, or certifications of the trajectory. Nevertheless, the main bottleneck on a massively parallel computer seems to be the spearhead.

3. Global certification

3.1. From local to global certification

A certified integrator of the dynamical system (1) can be defined to be a ball function

$\boldsymbol{\Phi} : (t_0, t_1, \boldsymbol{z}) \longmapsto \boldsymbol{\Phi}_{t_0, t_1}(\boldsymbol{z})$

with the property that $\Phi_{t_0, t_1}(z) \in \boldsymbol{\Phi}_{t_0, t_1}(\boldsymbol{z})$ for any $t_0, t_1$ and $z \in \boldsymbol{z}$, where $\Phi_{t_0, t_1}$ denotes the transitional flow between $t_0$ and $t_1$ as in Section 2.2. An extended certified integrator additionally requires a ball function

$\boldsymbol{V} : (t_0, t_1, \boldsymbol{z}) \longmapsto \boldsymbol{V}_{t_0, t_1}(\boldsymbol{z})$

with the property that $(\partial \Phi_{t_0, t_1} / \partial z)(z) \in \boldsymbol{V}_{t_0, t_1}(\boldsymbol{z})$ for any $t_0, t_1$ and $z \in \boldsymbol{z}$.

A local certified integrator of (1) is a special kind of certified integrator which only produces meaningful results if $t_0$ and $t_1$ are sufficiently close (and in particular such that $F$ is analytic between $t_0$ and $t_1$). In other words, we allow the radii of the entries of $\boldsymbol{\Phi}_{t_0, t_1}(\boldsymbol{z})$ to become infinite whenever this is not the case. Extended local certified integrators are defined similarly.

One interesting problem is how to produce global (extended) certified integrators out of local ones. The most naive strategy for doing this goes as follows. Assume that we are given a local certified integrator $\boldsymbol{\Phi}^{\mathrm{loc}}$, as well as $t_0$, $t_1$ and $\boldsymbol{z}$. If the radii of the entries of $\boldsymbol{\Phi}^{\mathrm{loc}}_{t_0, t_1}(\boldsymbol{z})$ are “sufficiently small” (finite, for instance, but we might require more precise answers), then we define $\boldsymbol{\Phi}_{t_0, t_1}(\boldsymbol{z}) = \boldsymbol{\Phi}^{\mathrm{loc}}_{t_0, t_1}(\boldsymbol{z})$. Otherwise, we take $t_{1/2} = (t_0 + t_1) / 2$ and define $\boldsymbol{\Phi}_{t_0, t_1}(\boldsymbol{z}) = \boldsymbol{\Phi}_{t_{1/2}, t_1}(\boldsymbol{\Phi}_{t_0, t_{1/2}}(\boldsymbol{z}))$. One may refine the strategy by including additional exception handling for breakdown situations. Unfortunately, it is classical that this naive strategy produces error estimates of extremely poor quality (due to the wrapping effect, and for several other reasons).
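Reusing the Ball class from the earlier sketch, a naive recursive implementation of this strategy might read as follows (the breakdown handling is simplistic and of our own choosing):

```python
def integrate(local, t0, t1, z, depth=0):
    """Naive global certified integrator: bisect until the local certified
    integrator 'local' returns enclosures with sufficiently small radii.
    Here z is a tuple of Ball instances."""
    if depth > 50:
        raise RuntimeError("breakdown: no useful enclosure found")
    w = local(t0, t1, z)
    if all(b.r < float("inf") for b in w):   # the "sufficiently small" test
        return w
    th = (t0 + t1) / 2
    # Certify the two halves separately and chain the resulting enclosures.
    w = integrate(local, t0, th, z, depth + 1)
    return integrate(local, th, t1, w, depth + 1)
```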

3.2. Certifying a numerical trajectory

A better strategy is to first compute a numerical solution to (1) together with its first variation, and to certify this “extended solution” at a second stage. So assume that we are given a subdivision $t_0 = u_0 < \cdots < u_k = t_1$ and approximate values $z_i \approx F(u_i)$ for $i = 0, \ldots, k$, as well as a ball enclosure $\boldsymbol{z}_0 \ni F(u_0)$ for the initial condition. We proceed as follows:

Stage 1
We first produce reasonable candidate enclosures $\boldsymbol{z}_i$ with $F(u_i) \in \boldsymbol{z}_i$ for all $i$ (for $i = 0$, we may simply take the given enclosure $\boldsymbol{z}_0$). Let $c_i$ denote the center of $\boldsymbol{z}_i$, $r_i$ its radius, and let $p$ be the current working precision. For some large constant $C$, a good typical ansatz is to take $c_i = z_i$ and to let $r_i$ be of the order of $C\, 2^{-p}\, \|z_i\|$, further inflated by a rough estimate of the amount by which an error at $u_i$ may still grow along the remainder of the trajectory.

At the very end, we will have to prove the correctness of the ansatz, thereby producing a certificate for the numerical trajectory.

Stage 2
We compute $\boldsymbol{V}_i := \boldsymbol{V}_{u_i, u_{i+1}}(\boldsymbol{z}_i)$ for $i = 0, \ldots, k - 1$ using an extended local integrator. Assuming correctness of the ansatz enclosures $\boldsymbol{z}_i$, this provides us with a certified enclosure

$\boldsymbol{V} := \boldsymbol{V}_{k-1} \cdots \boldsymbol{V}_0 \quad (2)$

for the first variation of the complete trajectory.

Stage 3
We compute $\boldsymbol{\Phi}_{u_i, u_{i+1}}(\mathcal{B}(z_i, 0))$ for $i = 0, \ldots, k - 1$ using a local integrator. Assuming correctness of the ansatz enclosures $\boldsymbol{z}_i$, this provides us with certified enclosures

$\boldsymbol{\varepsilon}_i := \boldsymbol{\Phi}_{u_i, u_{i+1}}(\mathcal{B}(z_i, 0)) - z_{i+1} \quad (3)$

$\boldsymbol{z}'_j := z_j + (\boldsymbol{V}_{j-1} \cdots \boldsymbol{V}_0)\, (\boldsymbol{z}_0 - z_0) + \sum_{i < j} (\boldsymbol{V}_{j-1} \cdots \boldsymbol{V}_{i+1})\, \boldsymbol{\varepsilon}_i \quad (4)$

for the local errors $\Phi_{u_i, u_{i+1}}(z_i) - z_{i+1}$ and for the values $F(u_j)$ of the exact solution, where $j = 1, \ldots, k$.

Stage 4
We finally check whether $\boldsymbol{z}'_i \subseteq \boldsymbol{z}_i$ for $i = 1, \ldots, k$. If this is the case, then the correctness of the ansatz follows by induction over $i$. Otherwise, for each index $i$ with $\boldsymbol{z}'_i \not\subseteq \boldsymbol{z}_i$, we replace our ansatz by a new one as follows: we write $\boldsymbol{z}_i = \mathcal{B}(c_i, r_i)$ and $\boldsymbol{z}'_i = \mathcal{B}(c'_i, r'_i)$, and take a suitably inflated version of $\boldsymbol{z}'_i$, such as $\mathcal{B}(c_i, 2\, (|c'_i - c_i| + r'_i))$, as the new $\boldsymbol{z}_i$. We next return to Stage 2 with this new ansatz. We return an error if no certificate is obtained after a suitable fixed number of such iterations.

Remark 1. If we want to certify our trajectory with a high precision $p$, then we clearly have to compute the enclosures $\boldsymbol{z}'_i$ with precision $p$. On the other hand, the enclosures $\boldsymbol{V}_i$ and $\boldsymbol{\varepsilon}_i$ are only needed during the auxiliary computations (3) and (4), and it actually suffices to compute them with a much lower precision $p'$ (which remains bounded if we let $p \rightarrow \infty$): the $\boldsymbol{\varepsilon}_i$ are themselves of the order of $2^{-p}$, so computing them with relative precision $p'$ yields roughly $p + p'$ bits of absolute precision. For the initial ansatz, we essentially need this precision to be sufficiently large for the inclusions $\boldsymbol{z}'_i \subseteq \boldsymbol{z}_i$ to be satisfiable for $i = 1, \ldots, k$. In general, $p'$ rather has to exceed the number of bits that are lost through error propagation between consecutive checkpoints.
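To make the four stages concrete, here is a self-contained sketch for the scalar model problem $y' = y$ with exact initial condition, where certified transitional flows are available in closed form; balls are (center, radius) pairs, and the rounding margin MU, the inflation factor and the ansatz constant are arbitrary illustrative choices:

```python
import math

def certify(u, z, eps0=1e-13, max_rounds=10, MU=1e-15):
    """Stages 1-4 for the model problem y' = y, whose transitional flows
    Phi_{i,i+1}(z) = exp(u[i+1] - u[i]) z are known in certified form."""
    k = len(u) - 1
    r = [eps0 * abs(zi) for zi in z]                     # Stage 1: ansatz radii
    for _ in range(max_rounds):
        # Stage 2: enclosures V[i] of the local first variations on the
        # ansatz balls (independent of z for this linear equation).
        V = [(math.exp(u[i + 1] - u[i]), MU) for i in range(k)]
        # Stage 3, eq. (3): local errors eps[i] = Phi(z[i]) - z[i+1].
        eps = [(math.exp(u[i + 1] - u[i]) * z[i] - z[i + 1], MU)
               for i in range(k)]
        # Stage 3, eq. (4): propagate the local errors to each u[j].
        new_r, ok = [0.0], True
        for j in range(1, k + 1):
            c = rad = 0.0
            for i in range(j):
                pc, pr = eps[i]
                for l in range(i + 1, j):                # multiply by V[l]
                    vc, vr = V[l]
                    pc, pr = pc * vc, abs(pc) * vr + abs(vc) * pr + pr * vr
                c, rad = c + pc, rad + pr
            new_r.append(abs(c) + rad)                   # recenter at z[j]
            ok = ok and new_r[j] <= r[j]                 # Stage 4: inclusion?
        if ok:
            return list(zip(z, new_r))
        r = [max(rj, 2 * nj) for rj, nj in zip(r, new_r)]  # inflate and retry
    raise RuntimeError("no certificate obtained")

u = [0.1 * i for i in range(11)]
print(certify(u, [math.exp(t) for t in u])[-1])
```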

3.3. Algorithmic considerations and parallelism

The next issue concerns the efficient and high quality evaluation of the formulas (2) and (4). The main potential problem already occurs in the case when the first variation is constant, so that (2) essentially reduces to the computation of the $k$-th power of a ball matrix $\boldsymbol{M}$. Assuming standard ball matrix arithmetic, the naive iterative method

$\boldsymbol{M}^k = \boldsymbol{M}^{k-1}\, \boldsymbol{M}$

may lead to an exponential growth of the relative error as a function of $k$. Here we understand the relative error of a ball matrix to be the norm of the matrix of radii divided by the norm of the matrix of centers. The bad exponential growth occurs for instance for rotation matrices

$M = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix},$

which correspond to the complex numbers $\mathrm{e}^{\mathrm{i} \theta}$. The growth remains polynomial in $k$ in the case of triangular matrices $M$. When using binary powering

$\boldsymbol{M}^{2 k} = (\boldsymbol{M}^k)^2, \qquad \boldsymbol{M}^{2 k + 1} = (\boldsymbol{M}^k)^2\, \boldsymbol{M},$

the growth of the relative error is systematically kept down to a polynomial in $k$.
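The phenomenon is easy to reproduce; the following self-contained sketch powers a ball enclosure of a rotation matrix both iteratively and by binary powering, and prints the resulting radii (the concrete numbers are illustrative, not taken from the paper):

```python
import math

def ball_mat_mul(A, B):
    """Product of 2x2 ball matrices, entries being (center, radius) pairs."""
    n = 2
    C = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            c = sum(A[i][l][0] * B[l][j][0] for l in range(n))
            r = sum(abs(A[i][l][0]) * B[l][j][1] + A[i][l][1] * abs(B[l][j][0])
                    + A[i][l][1] * B[l][j][1] for l in range(n))
            C[i][j] = (c, r + 1e-16 * abs(c))  # margin for rounding errors
    return C

def binary_power(M, k):
    if k == 1:
        return M
    S = binary_power(M, k // 2)
    S = ball_mat_mul(S, S)
    return ball_mat_mul(S, M) if k % 2 else S

t = 0.1
M = [[(math.cos(t), 1e-15), (-math.sin(t), 1e-15)],
     [(math.sin(t), 1e-15), (math.cos(t), 1e-15)]]

P = M
for _ in range(999):
    P = ball_mat_mul(P, M)   # naive iterative powering: radii grow like C^k
Q = binary_power(M, 1000)    # binary powering: polynomial growth only
print(P[0][0][1], Q[0][0][1])
```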

For this reason, it is recommended to evaluate (2) and (4) using a dichotomic algorithm similar to the one from Section 2.2. More precisely, assuming that $k = 2^l$ is a power of two and setting $m = k / 2$, we start with the recursive computation of the partial products $\boldsymbol{V}_{m-1} \cdots \boldsymbol{V}_0$ and $\boldsymbol{V}_{k-1} \cdots \boldsymbol{V}_m$ and of the corresponding partial sums in (4) for the two halves of the range, and then combine them using

$\boldsymbol{V}_{k-1} \cdots \boldsymbol{V}_0 = (\boldsymbol{V}_{k-1} \cdots \boldsymbol{V}_m)\, (\boldsymbol{V}_{m-1} \cdots \boldsymbol{V}_0)$

and the analogous relation for the propagated error sums. Given the initial numerical trajectory, this shows that the cost of the certification grows only with $\log k$ on sufficiently parallel computers.

It is also interesting to notice that the parallel dichotomic technique that we used to double the precision in Section 2.2 relies on ingredients very similar to the above way of certifying trajectories. We found this analogy to apply on several other occasions, such as the computation of eigenvalues of matrices. This is probably also related to the similarity between ball arithmetic and arithmetic in jet spaces of order one.

4. Lagrange models

4.1. Taylor models

Let $\mathcal{D}_r$ be the compact disk of center zero and radius $r$. A Taylor model of order $n$ on $\mathcal{D}_r$ consists of a polynomial $P \in \mathbb{C}[z]$ of degree $< n$ together with an error $\varepsilon \in \mathbb{R}^{\geqslant}$. We will denote such a Taylor model by $P + \mathcal{B}(\varepsilon)$ and consider it as a ball of functions: given an analytic function $f$ on $\mathcal{D}_r$ and $\boldsymbol{f} = P + \mathcal{B}(\varepsilon)$, we write $f \in \boldsymbol{f}$ if $|f(z) - P(z)| \leqslant \varepsilon$ for all $z \in \mathcal{D}_r$.

Basic arithmetic on Taylor models works in a similar way as ball arithmetic. The ring operations are defined as follows:

$(P + \mathcal{B}(\varepsilon)) \pm (Q + \mathcal{B}(\eta)) = (P \pm Q) + \mathcal{B}(\varepsilon + \eta)$

$(P + \mathcal{B}(\varepsilon)) \cdot (Q + \mathcal{B}(\eta)) = (P Q)_{;n} + \mathcal{B}(\lceil (P Q)_{n;} \rceil + \lceil P \rceil\, \eta + \lceil Q \rceil\, \varepsilon + \varepsilon\, \eta).$

Given a polynomial $A = \sum_i A_i z^i$, the product formula uses the notations

$A_{;n} = \sum_{i < n} A_i z^i, \qquad A_{n;} = A - A_{;n},$

and $\lceil A \rceil$ denotes any upper bound for $\max_{z \in \mathcal{D}_r} |A(z)|$ that is easy to compute. One may for instance take

$\lceil A \rceil = \sum_i |A_i|\, r^i.$

Now consider the operation $\int$ of integration from zero, $(\int f)(z) = \int_0^z f(t)\, \mathrm{d} t$. The integral of a Taylor model may be computed using

$\int (P + \mathcal{B}(\varepsilon)) = \int P + \mathcal{B}(\varepsilon\, r).$

This formula is justified by the mean value theorem.

In practice, the numerical computations at a given working precision involve additional rounding errors. Bounds for these rounding errors have to be added to the errors in the above formulas. It is also easy to define Taylor models on disks with general centers $c$, as being given by a Taylor model on $\mathcal{D}_r$ in the variable $z - c$. For more details, we refer to [15, 16].
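In Python, and ignoring rounding errors (so that the bounds below are only rigorous in exact arithmetic), these formulas can be implemented along the following lines; the class and its interface are our own toy choices:

```python
class TaylorModel:
    """Taylor model P + B(eps) of order n on the disk |z| <= rad: a real
    polynomial of degree < n plus a uniform error bound eps on the disk."""
    def __init__(self, coeffs, eps, n, rad):
        self.n, self.rad = n, rad
        # Truncate to degree < n and absorb the tail into the error bound.
        self.c = (coeffs[:n] + [0.0] * n)[:n]
        self.eps = eps + sum(abs(a) * rad**i for i, a in enumerate(coeffs[n:], n))

    def bound(self):
        # Easy upper bound sum_i |P_i| rad^i for max |P| on the disk.
        return sum(abs(a) * self.rad**i for i, a in enumerate(self.c))

    def __add__(self, g):
        c = [a + b for a, b in zip(self.c, g.c)]
        return TaylorModel(c, self.eps + g.eps, self.n, self.rad)

    def __mul__(self, g):
        prod = [0.0] * (2 * self.n - 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(g.c):
                prod[i + j] += a * b
        # Error [P] eta + [Q] eps + eps eta; the tail of PQ is absorbed
        # by the constructor.
        eps = self.bound() * g.eps + g.bound() * self.eps + self.eps * g.eps
        return TaylorModel(prod, eps, self.n, self.rad)

    def integrate(self):
        # int(P + B(eps)) = int(P) + B(eps * rad), by the mean value theorem.
        c = [0.0] + [a / (i + 1) for i, a in enumerate(self.c)]
        return TaylorModel(c, self.eps * self.rad, self.n, self.rad)

f = TaylorModel([1.0, 1.0], 0.0, 8, 0.5)   # the function 1 + z on |z| <= 1/2
print((f * f).integrate().c[:4])            # z + z^2 + z^3/3: [0, 1, 1, 1/3]
```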

4.2. Lagrange models

A Lagrange model of order $n$ on $\mathcal{D}_r$ is a functional “ball” of the form $\boldsymbol{f} = \boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n$, where $\boldsymbol{P}$ is a ball polynomial of degree $< n$ and $\varepsilon \in \mathbb{R}^{\geqslant}$ is the so-called tail bound. Given an analytic function $f$ on $\mathcal{D}_r$ and such an $\boldsymbol{f}$, we write $f \in \boldsymbol{f}$ if $f = P + \delta\, z^n$ for some polynomial $P \in \boldsymbol{P}$ and some analytic function $\delta$ on $\mathcal{D}_r$ with $|\delta(z)| \leqslant \varepsilon$ for all $z \in \mathcal{D}_r$. The name “Lagrange model” is motivated by the Taylor–Lagrange formula, which provides a careful estimate for the truncation error of a Taylor series expansion. We may also regard Lagrange models as a way to substantiate the “big Oh” term in the expansion $f(z) = f_0 + f_1 z + \cdots + f_{n-1} z^{n-1} + O(z^n)$.

Basic arithmetic on Lagrange models works in a similar way as in the case of Taylor models:

$(\boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n) \pm (\boldsymbol{Q} + \mathcal{B}(\eta)\, z^n) = (\boldsymbol{P} \pm \boldsymbol{Q}) + \mathcal{B}(\varepsilon + \eta)\, z^n$

$(\boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n) \cdot (\boldsymbol{Q} + \mathcal{B}(\eta)\, z^n) = (\boldsymbol{P} \boldsymbol{Q})_{;n} + \mathcal{B}(\lceil (\boldsymbol{P} \boldsymbol{Q})_{n;} / z^n \rceil + \lceil \boldsymbol{P} \rceil\, \eta + \lceil \boldsymbol{Q} \rceil\, \varepsilon + \varepsilon\, \eta\, r^n)\, z^n.$

This time, we may take

$\lceil \boldsymbol{P} \rceil = \sum_i (|c_i| + \rho_i)\, r^i, \qquad \boldsymbol{P} = \sum_i \mathcal{B}(c_i, \rho_i)\, z^i,$

as the “easy to compute” upper bound of a ball polynomial $\boldsymbol{P}$. The main advantage of Lagrange models with respect to Taylor models is that they allow for more precise tail bounds for integrals:

$\int (\boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n) \subseteq \int \boldsymbol{P} + \mathcal{B}\!\left(\frac{\varepsilon\, r}{n + 1}\right) z^{n + 1}.$

Indeed, for any analytic function $\delta$ on $\mathcal{D}_r$ with $|\delta(z)| \leqslant \varepsilon$ on $\mathcal{D}_r$, integration on a straight line segment from $0$ to any $z \in \mathcal{D}_r$ yields

$\left| \int_0^z \delta(t)\, t^n\, \mathrm{d} t \right| \leqslant \varepsilon \int_0^{|z|} t^n\, \mathrm{d} t = \frac{\varepsilon\, |z|^{n + 1}}{n + 1},$

whence $\int \delta(t)\, t^n\, \mathrm{d} t \in \mathcal{B}(\varepsilon r / (n + 1))\, z^{n + 1}$.

The main disadvantage of Lagrange models with respect to Taylor models is that they require more data (individual error bounds for the coefficients of the polynomial) and that basic arithmetic is slightly more expensive. Indeed, arithmetic on ball polynomials is a constant factor more expensive than ordinary polynomial arithmetic. Nevertheless, this constant tends to one if either the order $n$ or the working precision gets large. This makes Lagrange models particularly well suited for high precision computations, where they cause negligible overhead, but greatly improve the quality of tail bounds for integrals. For efficient implementations of basic arithmetic on ball polynomials, we refer to [5].
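The effect of the sharper integration bound is easy to quantify; the following few lines track how a tail bound evolves under repeated integration, in the Taylor style (multiplication by $r$) versus the Lagrange style (multiplication by $r / (n + 1)$, where $n$ is the current order); the concrete numbers are merely illustrative:

```python
r, n0, steps = 1.0, 10, 10
taylor_eps = lagrange_eps = 1e-20
for i in range(steps):
    taylor_eps *= r                    # Taylor model: eps -> eps * r
    lagrange_eps *= r / (n0 + i + 1)   # Lagrange model of order n0 + i
print(taylor_eps, lagrange_eps)        # 1e-20 versus about 1.5e-32
```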

4.3. Reliable integration of dynamical systems

Let us now return to the dynamical system (1). We already mentioned relaxed power series computations and Newton's method as two efficient techniques for the numerical computation of power series solutions to differential equations. These methods can still be used for ball coefficients, modulo some preconditioning or suitable tweaking of the basic arithmetic on ball polynomials; see [5] for more details. In order to obtain a local certified integrator in the sense of Section 3.1, it remains to be shown how to compute tail bounds for truncated power series solutions at a given order $n$.

From now on, we will be interested in finding local certified solutions of (1) at the origin. We may rewrite the system (1) together with the initial condition $F(0) = I$ as a fixed point equation

$F = \Xi(F), \qquad \Xi(f) := I + \int \Phi(f). \quad (5)$

Now assume that a Lagrange model $\boldsymbol{F} = \boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n$ on $\mathcal{D}_r$ satisfies

$\Xi(\boldsymbol{F}) \subseteq \boldsymbol{F}. \quad (6)$

Then we claim that (5) admits a solution $f \in \boldsymbol{F}$ on $\mathcal{D}_r$. Indeed, the analytic functions $g$ on $\mathcal{D}_r$ with $g \in \boldsymbol{F}$ form a compact convex set which, by (6), is mapped into itself by $\Xi$, so the operator $\Xi$ admits a fixed point in this set. This fixed point is actually unique, since its power series coefficients can be computed uniquely by induction.

Using ball power series computations, we already know how to compute a ball polynomial $\boldsymbol{P}$ of degree $< n$ such that

$\Xi(\boldsymbol{P}) \subseteq \boldsymbol{P} + \mathcal{B}(\gamma)\, z^n$

for some explicit $\gamma \in (\mathbb{R}^{\geqslant})^d$. Taking $\boldsymbol{F} = \boldsymbol{P} + \mathcal{B}(\varepsilon)\, z^n$, it remains to be shown how to compute $\varepsilon \in (\mathbb{R}^{\geqslant})^d$ in such a way that (6) is satisfied. Now denoting by $J_\Phi$ the Jacobian matrix of $\Phi$, and putting $M = \lceil J_\Phi(\boldsymbol{F}) \rceil$, we have

$\Phi(\boldsymbol{F}) \subseteq \Phi(\boldsymbol{P}) + \mathcal{B}(M\, \varepsilon)\, z^n.$

Writing $\Xi(\boldsymbol{F}) \subseteq \boldsymbol{P} + \mathcal{B}(\gamma')\, z^n$ for the resulting enclosure, the integration bound from Section 4.2 yields $\gamma' \leqslant \gamma + \frac{r}{n + 1}\, M\, \varepsilon$, and it thus suffices to find $\varepsilon$ such that

$\gamma + \frac{r}{n + 1}\, M\, \varepsilon \leqslant \varepsilon. \quad (7)$

Assuming that all eigenvalues of $M$ are strictly bounded by $\frac{n + 1}{r}$, it suffices to “take”

$\varepsilon = \left(1 - \frac{r}{n + 1}\, M\right)^{-1} \gamma. \quad (8)$

We have to be a little bit careful here, since $M$ depends on $\varepsilon$. Nevertheless, the formula (8) provides a good ansatz: starting with $\varepsilon^{(0)} = 0$, we may define

$\varepsilon^{(l + 1)} = \left(1 - \frac{r}{n + 1}\, M^{(l)}\right)^{-1} \gamma, \qquad M^{(l)} = \left\lceil J_\Phi\bigl(\boldsymbol{P} + \mathcal{B}(\varepsilon^{(l)})\, z^n\bigr) \right\rceil, \quad (9)$

for all $l \geqslant 0$. If $r$ was chosen small enough, then this sequence quickly stabilizes. Assuming that this is the case, we set $\varepsilon = \varepsilon^{(l)}$ for the last computed term, and check whether (7) holds. If so, then we have obtained the required Lagrange model solution $\boldsymbol{F}$ of (6). Otherwise, we will need to decrease $r$, or increase $n$ and the working precision.
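As an illustration of the whole procedure on the scalar model equation $y' = y^2$, $y(0) = 1$ (so that $J_\Phi(y) = 2 y$ and the exact solution is $1 / (1 - z)$), the following self-contained sketch computes $\gamma$ naively and then iterates (9); all rounding issues are again ignored:

```python
def tail_bound(P, r, n, iters=10):
    """Compute eps with gamma + r/(n+1) * M(eps) * eps <= eps for y' = y^2,
    where P (length n, exact coefficients) solves the equation up to order n."""
    up = lambda c: sum(abs(a) * r**i for i, a in enumerate(c))  # bound on D_r
    # gamma: bound for ((1 + int P^2) - P) / z^n on D_r.
    sq = [sum(P[i] * P[k - i] for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
          for k in range(2 * n - 1)]
    xi = [1.0] + [a / (i + 1) for i, a in enumerate(sq)]  # Xi(P) = 1 + int P^2
    defect = [a - (P[i] if i < n else 0.0) for i, a in enumerate(xi)]
    assert all(abs(a) < 1e-12 for a in defect[:n])  # P solves up to order n
    gamma = sum(abs(a) * r**(i - n) for i, a in enumerate(defect) if i >= n)
    eps = 0.0
    for _ in range(iters):                    # the fixed point iteration (9)
        M = 2 * (up(P) + eps * r**n)          # bound for J(F) = 2F on P + B(eps) z^n
        if r * M / (n + 1) >= 1:
            raise RuntimeError("step size r too large")
        eps = gamma / (1 - r * M / (n + 1))
    return eps

P = [1.0] * 12                   # Taylor coefficients of 1/(1-z), order n = 12
print(tail_bound(P, 0.5, 12))    # certified tail bound on the disk |z| <= 1/2
```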

4.4. Discussion

Several remarks are in order about the method from the previous subsection. Let us first consider the important case when the input data are given exactly, and let $\rho$ denote the convergence radius of the unique solution $f$ of (5). For large working precisions $p$ and expansion orders $n$, we can make $\gamma$ arbitrarily small. Assuming that the eigenvalues of $M$ remain strictly bounded by $\frac{n + 1}{r}$, this also implies that the bounds $\varepsilon^{(l)}$ become arbitrarily small, and that $\varepsilon$ satisfies (7). In other words, for any $r < \rho$, there exists a sufficiently large order $n$ (and working precision $p$) for which the method succeeds.

Let us now investigate what happens if we apply the same method with Taylor models instead of Lagrange models. In that case, since the integration of a Taylor model multiplies the error bound by $r$ instead of $r / (n + 1)$, the equation (8) becomes

$\varepsilon = (1 - r\, M)^{-1} \gamma.$

On the one hand, this implies that the method will break down as soon as $r^{-1}$ reaches the largest eigenvalue of $M$, which may happen for $r$ well below $\rho$. Even if $J_\Phi$ is constant (i.e. (1) reduces to the equation $F' = M F$ for a constant matrix $M$), the step size cannot exceed the inverse of the maximal eigenvalue of $M$. On the other hand, and still in the case when $J_\Phi$ is constant, we see that Lagrange models allow us to take a step size which is $n + 1$ times as large. In general, the gain will be smaller, since $M$ usually admits larger eigenvalues on larger disks. Nevertheless, Lagrange models will systematically allow for larger step sizes.

Notice that the matrices that we need to invert in (8) and (9) are derived from the matrix $J_\Phi(\boldsymbol{F})$ with Lagrange model entries, which should really be regarded as functions. Ideally speaking, we would like to compute a uniform bound for the inverses of the evaluations of these matrices at all points in $\mathcal{D}_r$. However, this may be computationally expensive. Usually, it is preferable to replace each Lagrange model entry of the matrix to be inverted by a ball enclosure. The resulting ball matrix can be inverted much faster, although the resulting error bounds may be of inferior quality.

Bibliography

[1]

G. Alefeld and J. Herzberger. Introduction to interval analysis. Academic Press, New York, 1983.

[2]

R.P. Brent and H.T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.

[3]

T.N. Gambill and R.D. Skeel. Logarithmic reduction of the wrapping effect with application to ordinary differential equations. SIAM J. Numer. Anal., 25(1):153–162, 1988.

[4]

J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.

[5]

J. van der Hoeven. Ball arithmetic. In Arnold Beckmann, Christine Gaßner, and Benedikt Löwe, editors, Logical approaches to Barriers in Computing and Complexity, number 6 in Preprint-Reihe Mathematik, pages 179–208. Ernst-Moritz-Arndt-Universität Greifswald, February 2010. International Workshop.

[6]

J. van der Hoeven. Newton's method and FFT trading. JSC, 45(8):857–878, 2010.

[7]

J. van der Hoeven. Faster relaxed multiplication. In Proc. ISSAC '14, pages 405–412. Kobe, Japan, July 2014.

[8]

J. van der Hoeven. Journées Nationales de Calcul Formel (2011), volume 2 of Les cours du CIRM, chapter Calcul analytique. CEDRAM, 2011. Exp. No. 4, 85 pages, http://ccirm.cedram.org/ccirm-bin/fitem?id=CCIRM_2011__2_1_A4_0.

[9]

L. Jaulin, M. Kieffer, O. Didrit, and E. Walter. Applied interval analysis. Springer, London, 2001.

[10]

V. Kreinovich. Interval computations. http://www.cs.utep.edu/interval-comp/. Useful information and references on interval computations.

[11]

U.W. Kulisch. Computer Arithmetic and Validity. Theory, Implementation, and Applications. Number 33 in Studies in Mathematics. De Gruyter, 2008.

[12]

W. Kühn. Rigorously computed orbits of dynamical systems without the wrapping effect. Computing, 61:47–67, 1998.

[13]

R. Lohner. Einschließung der Lösung gewöhnlicher Anfangs- und Randwertaufgaben und Anwendungen. PhD thesis, Universität Karlsruhe, 1988.

[14]

R. Lohner. On the ubiquity of the wrapping effect in the computation of error bounds. In U. Kulisch, R. Lohner, and A. Facius, editors, Perspectives on enclosure methods, pages 201–217. Springer, Wien, New York, 2001.

[15]

K. Makino and M. Berz. Remainder differential algebras and their applications. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, Computational differentiation: techniques, applications and tools, pages 63–74. SIAM, Philadelphia, 1996.

[16]

K. Makino and M. Berz. Suppression of the wrapping effect by Taylor model-based validated integrators. Technical Report MSU Report MSUHEP 40910, Michigan State University, 2004.

[17]

R.E. Moore. Automatic local coordinate transformations to reduce the growth of error bounds in interval computation of solutions to ordinary differential equations. In L.B. Rall, editor, Error in Digital Computation, volume 2, pages 103–140. John Wiley, 1965.

[18]

R.E. Moore. Interval Analysis. Prentice Hall, Englewood Cliffs, N.J., 1966.

[19]

R.E. Moore, R.B. Kearfott, and M.J. Cloud. Introduction to Interval Analysis. SIAM Press, 2009.

[20]

A. Neumaier. The wrapping effect, ellipsoid arithmetic, stability and confidence regions. Computing Supplementum, 9:175–190, 1993.

[21]

A. Neumaier. Taylor forms - use and limits. Reliable Computing, 9:43–79, 2002.

[22]

K. Nickel. How to fight the wrapping effect. In Proc. of the Intern. Symp. on Interval Mathematics, pages 121–132. Springer-Verlag, 1985.

[23]

A. Neumaier. Interval methods for systems of equations. Cambridge university press, Cambridge, 1990.

[24]

W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.

[25]

S.M. Rump. Verification methods: rigorous results using floating-point arithmetic. Acta Numerica, 19:287–449, 2010.

[26]

A. Sedoglavic. Méthodes seminumériques en algèbre différentielle ; applications à l'étude des propriétés structurelles de systèmes différentiels algébriques en automatique. PhD thesis, École polytechnique, 2001.

[27]

A. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 2(42):230–265, 1936.

[28]

K. Weihrauch. Computable analysis. Springer-Verlag, Berlin/Heidelberg, 2000.