Faster FFTs in medium precision

Joris van der Hoevena, Grégoire Lecerfb

Laboratoire d'informatique, UMR 7161 CNRS

Campus de l'École polytechnique

1, rue Honoré d'Estienne d'Orves

Bâtiment Alan Turing, CS35003

91120 Palaiseau

a. Email: vdhoeven@lix.polytechnique.fr

b. Email: lecerf@lix.polytechnique.fr

November 4, 2023

In this paper, we present new algorithms for the computation of fast Fourier transforms over complex numbers for “medium” precisions, typically in the range from until bits. On the one hand, such precisions are usually not supported by hardware. On the other hand, asymptotically fast algorithms for multiple precision arithmetic do not pay off yet. The main idea behind our algorithms is to develop efficient vectorial multiple precision fixed point arithmetic, capable of exploiting SIMD instructions in modern processors.

Keywords: floating point arithmetic, quadruple precision, complexity bound, FFT, SIMD

A.C.M. subject classification:

G.1.0 Computer arithmetic

A.M.S. subject classification: 65Y04, 65T50, 68W30

1. Introduction

Multiple precision arithmetic [4] is crucial in areas such as computer algebra and cryptography, and increasingly useful in mathematical physics and numerical analysis [2]. Early multiple precision libraries appeared in the seventies [3], and nowadays GMP [11] and MPFR [8] are typically very efficient for large precisions of more than, say, 1000 bits. However, for precisions which are only a few times larger than the machine precision, these libraries suffer from a large overhead. For instance, the MPFR library for arbitrary precision and IEEE-style standardized floating point arithmetic is typically about a factor 100 slower than double precision machine arithmetic.

This overhead of multiple precision libraries tends to further increase with the advent of wider SIMD (Single Instruction, Multiple Data) arithmetic in modern processors, such as the Intel AVX technology. Indeed, it is hard to take advantage of wide SIMD instructions when implementing basic arithmetic for integer sizes of only a few words. In order to fully exploit SIMD instructions, one should rather operate on vectors of integers of a few words. A second problem with current SIMD arithmetic is that CPU vendors tend to privilege wide floating point arithmetic over wide integer arithmetic, which would be most useful for speeding up multiple precision libraries.

In order to make multiple precision arithmetic more useful in areas such as numerical analysis, it is a major challenge to reduce the overhead of multiple precision arithmetic for small multiples of the machine precision, and to build libraries with direct SIMD arithmetic for multiple precision numbers.

One existing approach is based on the “TwoSum” and “TwoProduct” algorithms [7, 22], which allow for the exact computation of sums and products of two machine floating point numbers. The results of these operations are represented as sums where and have no “overlapping bits” (e.g. or ). The TwoProduct algorithm can be implemented using only two instructions when hardware features the fused-multiply-add (FMA) and fused-multiply-subtract (FMS) instructions, as is for instance the case for Intel's AVX2 enabled processors. The TwoSum algorithm could be done using only two instructions as well if we had similar fused-add-add and fused-add-subtract instructions. Unfortunately, this is not the case for current hardware.
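
For concreteness, here is a minimal C++ sketch of these two classical building blocks (function names and layout are ours); TwoSum is exact in round-to-nearest mode, and the FMA-based TwoProduct indeed costs only two instructions.

  #include <cmath>   // std::fma

  // Classical error-free transformations [7, 22]; names are ours.
  // two_sum: s + e equals a + b exactly (in round-to-nearest mode).
  static void two_sum(double a, double b, double& s, double& e) {
    s = a + b;
    double bb = s - a;              // the part of b that was actually added
    e = (a - (s - bb)) + (b - bb);  // the rounding error, recovered exactly
  }

  // two_prod: h + l equals a * b exactly, using a single FMA for the low part.
  static void two_prod(double a, double b, double& h, double& l) {
    h = a * b;
    l = std::fma(a, b, -h);
  }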

It is classical that double machine precision arithmetic can be implemented reasonably efficiently in terms of the TwoSum and TwoProduct algorithms [7, 22, 23]. The approach has been further extended in [24] to higher precisions. Specific algorithms are also described in [21] for triple-double precision, and in [15] for quadruple-double precision. But these approaches tend to become inefficient for large precisions.

For certain applications, efficient fixed point arithmetic is actually sufficient. One good example is the fast discrete Fourier transform (FFT) [6], which is only numerically accurate if all input coefficients are of the same order of magnitude. This means that we may scale all input coefficients to a fixed exponent and use fixed point arithmetic for the actual transform. Moreover, FFTs (and especially multi-dimensional FFTs) naturally benefit from SIMD arithmetic.

In this paper, we will show how to efficiently implement FFTs using fixed point arithmetic for small multiples of the machine precision. In fact, the actual working precision will be of the form , where , for a small integer (typically, ), and denotes the number of fractional bits of the mantissa (also known as the trailing significand field, so that for IEEE double precision numbers).

Allowing for a small number of “nail” bits makes it easier to handle carries efficiently. On the downside, we sacrifice a few bits and we need an additional routine for normalizing our numbers. Fortunately, normalizations can often be delayed. For instance, every complex butterfly operation during an FFT requires only four normalizations (the real and imaginary parts of the two outputs).

Redundant representations with nail bits, also known as carry-save representations, are very classical in hardware design, but rarely used in software libraries. The term “nail bits” was coined by the GMP library [11], with a different focus on high precisions. However, the GMP code which uses this technology is experimental and disabled by default. Redundant representations have also found some use in modular arithmetic. For instance, they have recently been used to speed up modular FFTs [13].

Let denote the time spent to compute an FFT of length at a precision of bits. For an optimal implementation at precision , we would expect that . However, the naive implementation of a product at precision requires at least machine multiplications, so is a more realistic goal for small . The first contribution of this paper is a series of optimized routines for basic fixed point arithmetic for small . These routines are vectorial by nature and therefore well suited to current Intel AVX technology. The second contribution is an implementation inside the C++ libraries of Mathemagix [17] (which is currently limited to a single thread). For small , our timings seem to indicate that . For , we compared our implementation with FFTW3 [9] (in which case we only obtained when using the built-in type __float128 of GCC) and a home-made double-double implementation along the same lines as [23] (in which case we got ).

Although the main ingredients of this paper (fixed point arithmetic and nail bits) are not new, we believe that we have combined them in a novel way and demonstrated their efficiency in conjunction with modern wide SIMD technology. There has been recent interest in efficient multiple precision FFTs for the accurate integration of partial differential equations from hydrodynamics [5]. Our new algorithms should be useful in this context, with special emphasis on the case when . We also expect that the ideas in this paper can be adapted for some other applications, such as efficient computations with truncated numeric Taylor series for medium working precisions.

Our paper is structured as follows. Section 2 contains a detailed presentation of fixed point arithmetic for the important precision . In Section 3, we show how to apply this arithmetic to the computation of FFTs, and we provide timings. In Section 4, we show how our algorithms generalize to higher precisions with , and present further timings for the cases when and . In our last section, we discuss possible speed-ups if certain SIMD instructions were hardware-supported, as well as some ideas on how to use asymptotically faster algorithms in this context.

Notations

Throughout this paper, we assume IEEE arithmetic with correct rounding and we denote by the set of machine floating point numbers. We let be the machine precision minus one (which corresponds to the number of fractional bits of the mantissa) and let and be the minimal and maximal exponents of machine floating point numbers. For IEEE double precision numbers, this means that , and .

The algorithms in this paper are designed to work with all possible rounding modes. Given and , we denote by the rounding of according to the chosen rounding mode. If is the exponent of and (i.e. in the absence of overflow and underflow), then we notice that .

Modern processors usually support fused-multiply-add (FMA) and fused-multiply-subtract (FMS) instructions, both for scalar and SIMD vector operands. Throughout this paper, we assume that these instructions are indeed present, and we denote by and the roundings of and according to the chosen rounding mode.

2. Fixed point arithmetic for quasi doubled precision

Let . In this section, we are interested in developing efficient fixed point arithmetic for a bit precision . Allowing to be smaller than makes it possible to efficiently handle carries during intermediate computations. We denote by the number of extra “nail” bits.

2.1. Representation of fixed point numbers

We denote by the set of fixed point numbers which can be represented as

where are such that

(1)
(2)
(3)

It will be convenient to write for numbers of the above form. Since contains all numbers with and , this means that can be thought of as a fixed point type of precision .

Remark 1. Usually, we will even have and , but the extra flexibility provided by (2) and (3) will be useful during intermediate computations. In addition, for efficiency reasons, we do not require that .

Figure 1. Schematic representation of the decomposition .

2.2. Splitting numbers at a given exponent

An important subalgorithm for efficient fixed point arithmetic computes the truncation of a floating point number at a given exponent:

Algorithm Split()

return

Proposition 2. Given and such that , the algorithm Split computes a number with and .

Proof. Let . Since , we have

Since is obtained by rounding , it follows that

whence the exponent of is and , for any rounding mode. Since also has exponent , it follows that satisfies . Furthermore, and are both integer multiples of , whence so is . Finally, .

Remark 3. Assuming that the rounding mode is set towards zero, the condition suffices in Proposition 2. Indeed, this weaker condition implies , and therefore , with the rest of the proof unchanged. The same remark holds for Proposition 7 below and allows several of the subsequent bounds to be slightly improved. As a consequence, this sometimes allows us to decrease by one. However, in this paper, we prefer to avoid hypotheses on the rounding mode, for the sake of portability, efficiency, and simplicity. Nevertheless, our observation may be useful on some recent processors (such as Intel's AVX-512 enabled ones), which make it possible to force rounding modes at the level of individual instructions.
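
To make the splitting step concrete, here is a C++ sketch of one classical way of realizing Split for IEEE double precision (with 52 fractional bits); the magic constant and the bound on |x| are our assumptions, so the authors' exact pseudo-code may differ in such details.

  #include <cmath>   // std::ldexp

  // Truncate x at exponent e: return a multiple h of 2^e with |x - h| <= 2^e.
  // Sketch of the classical add-and-subtract trick, assuming |x| <= 2^(e+51):
  // the sum x + c with c = 1.5 * 2^(52+e) has an ulp of exactly 2^e, so that
  // rounding it and subtracting c back truncates x at exponent e, for any
  // rounding mode.
  static double split_at(double x, int e) {
    double c = std::ldexp(1.5, 52 + e);
    double y = x + c;    // rounds x to a multiple of 2^e
    return y - c;        // this subtraction is exact
  }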

2.3. Normalization

Let . We will write for the subset of of all numbers with

Numbers in are said to be in normal form.

Remark 4. Both and are in normal form, and both and represent the number . Consequently, our fixed point representation is still redundant under this normalization (this is reminiscent of Avižienis' representations [1]).

Algorithm Normalize()

return

Proposition 5. Given with , the algorithm Normalize returns with .

Proof. Since , using Proposition 2 with , we have and . Using the fact that , this entails and therefore . Hence , so that . Finally, implies , so that .
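
As an illustration of how such a normalization may look in code, here is a hedged C++ sketch for a two-limb value x = x1 + x0 with x1 a multiple of 2^(-μ); the concrete values of μ and δ, the bound on |x0|, and the name fixed2 are our assumptions and need not coincide with the paper's choices.

  #include <cmath>   // std::ldexp

  constexpr int DELTA = 3;           // hypothetical number of nail bits
  constexpr int MU    = 52 - DELTA;  // hypothetical mu (52 fractional bits for doubles)

  struct fixed2 { double x1, x0; };  // assumed value x1 + x0, with x1 in 2^(-MU) * Z

  // Move the multiple-of-2^(-MU) part of x0 (the carry) into x1, so that the
  // remaining low limb satisfies |x0| <= 2^(-MU) afterwards (assuming |x0| is
  // at most a few units to begin with).
  static fixed2 normalize(fixed2 a) {
    double c = std::ldexp(1.5, 52 - MU);  // splitting constant for exponent -MU
    double carry = (a.x0 + c) - c;        // multiple of 2^(-MU) close to a.x0
    return { a.x1 + carry, a.x0 - carry };
  }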

2.4. Addition and subtraction

Non-normalized addition and subtraction of fixed point numbers are straightforward:

Algorithm Add

return

Algorithm Subtract

return

Proposition 6. Let and with . If , then we have with . If , then we have with .

Proof. Since both and are integer multiples of , the addition is exact. Furthermore, the exponents of and are both strictly bounded by . Consequently, and . Finally . The statement for the subtraction follows by replacing by .

2.5. Multiplication

Our multiplication algorithm for fixed point numbers is based on a subalgorithm LongMul which computes the exact product of two numbers in the form of a sum , with the additional constraint that . Without this additional constraint (and in the absence of overflow and underflow), and can be computed using the classical “Two Product” algorithm: , . Our LongMul algorithm exploits the FMA and FMS instructions in a similar way.

Algorithm LongMul

return

Algorithm Multiply

return

Proposition 7. Let and be such that . Then the algorithm computes a pair with , , and . In addition, if , then and .

Proof. As in the proof of Proposition 2, one shows that and . It follows that and . If , then the subtraction is exact, whence .

Proposition 8. Let and with , and . Then we have and .

Proof. Let us write , and for the successive values of taken during the execution of the algorithm. Since , Proposition 7 implies that , , and . Using that and , we next obtain and . We also get that and . Finally we obtain:
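
A C++ sketch of LongMul and Multiply along these lines is shown below, with the same assumed two-limb representation as in the normalization sketch (repeated here for self-containment); the splitting constant is ours, but the overall cost (three operations for LongMul, five in total for Multiply) is consistent with the operation counts reported later in Table 3.

  #include <cmath>   // std::fma, std::ldexp

  constexpr int MU = 49;             // hypothetical mu = 52 - 3 nail bits

  struct fixed2 { double x1, x0; };  // assumed value x1 + x0, with x1 in 2^(-MU) * Z

  // long_mul: exact product x * y = h + l with h a multiple of 2^(-MU),
  // assuming |x * y| is small enough for the splitting constant to apply.
  static void long_mul(double x, double y, double& h, double& l) {
    double c = std::ldexp(1.5, 52 - MU);
    h = std::fma(x, y, c) - c;       // rounds x * y to a multiple of 2^(-MU)
    l = std::fma(x, y, -h);          // exact remainder
  }

  // multiply: drop the x0 * y0 term (of order 2^(-2 MU)) and fold the two
  // cross terms into the low limb with two further FMAs.
  static fixed2 multiply(fixed2 x, fixed2 y) {
    double h, l;
    long_mul(x.x1, y.x1, h, l);
    l = std::fma(x.x1, y.x0, l);
    l = std::fma(x.x0, y.x1, l);
    return { h, l };                 // in general not yet in normal form
  }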

2.6. C++ implementation inside Mathemagix

For our C++ implementation inside Mathemagix, we introduced the template type

template<typename C, typename V> class fixed_quadruple;

The parameter C corresponds to a built-in numeric type such as double. The parameter V is a “traits” type (called the “variant” in Mathemagix) and specifies the precision (see [18] for details). When instantiating for C=double and the default variant V, the type fixed_quadruple<double> corresponds to with (see the file numerix/fixed_quadruple.hpp).

Since all algorithms from this section only use basic arithmetic instructions (add, subtract, multiply, FMA, FMS) and no branching, they admit straightforward SIMD analogues. Mathemagix features a very systematic support for SIMD types and operations [18]. This provides us with SIMD versions for multiple precision fixed point arithmetic simply by instantiating the above template types for a suitable numeric SIMD class, such as avx_double from numerix/avx.hpp.

3. Fast Fourier transforms

In this section we describe how to use the fixed point arithmetic functions to compute FFTs. The number of nail bits is adjusted so as to perform a single normalization stage per butterfly. In the next paragraphs we follow the classical in-place Cooley–Tukey algorithm [6].

3.1. Complex arithmetic

We implement complex analogues ComplexNormalize, ComplexAdd, ComplexSubtract and ComplexMultiply of Normalize, Add, Subtract and Multiply in a naive way. We have fully specified ComplexMultiply below, as an example. The other three routines proceed componentwise, by applying the real counterparts to the real and imaginary parts. Here and represent the real and imaginary parts of respectively. The norm of the complex number is defined as .

Algorithm ComplexMultiply()

return
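
Spelled out in code, the naive scheme amounts to four real products, one addition and one subtraction; the sketch below uses plain double for readability, the fixed point version replacing *, + and - by Multiply, Add and Subtract (the struct name cplx is ours).

  // Naive complex product (x.re + i x.im) * (y.re + i y.im).
  struct cplx { double re, im; };

  static cplx complex_multiply(cplx x, cplx y) {
    cplx z;
    z.re = x.re * y.re - x.im * y.im;
    z.im = x.re * y.im + x.im * y.re;
    return z;
  }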

3.2. Butterflies

The basic building block for fast discrete Fourier transforms is the complex butterfly operation. Given a pair and a precomputed root of unity , the butterfly operation computes a new pair . For inverse transforms, one rather computes the pair instead. For simplicity, and without loss of generality, we may assume that the approximation of in satisfies .

Algorithm DirectButterfly

return

Algorithm InverseButterfly

return
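
For reference, the textbook form of the two butterflies over std::complex<double> is sketched below; in the fixed point setting, each output additionally goes through ComplexNormalize, as analyzed in Propositions 9 and 10 below. The precise convention for the inverse butterfly (which root of unity is used) is an assumption on our part.

  #include <complex>
  #include <utility>

  using C = std::complex<double>;

  // Direct butterfly: (a, b) -> (a + w b, a - w b).
  static std::pair<C, C> direct_butterfly(C a, C b, C w) {
    C t = w * b;
    return { a + t, a - t };
  }

  // Inverse butterfly (decimation-in-frequency style): (a, b) -> (a + b, (a - b) w).
  static std::pair<C, C> inverse_butterfly(C a, C b, C w) {
    return { a + b, (a - b) * w };
  }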

Proposition 9. Let with and assume that . Then and .

Proof. Let be as in ComplexMultiply with and as arguments. From Proposition 8, it follows that and , and we have similar bounds for and . From Proposition 6, we next get and . Applying Proposition 6 again, we obtain , and . The conclusion now follows from Proposition 5.

Proposition 10. Let with and assume that . Then and .

Proof. Proposition 6 yields and , and . Proposition 5 then gives us , . Let be as in ComplexMultiply with and as arguments. From Proposition 8, it follows that , , and we have similar bounds for and . From Proposition 6 again, we get and . Finally Proposition 5 gives us , .

For reasons of space, we will not go into more details concerning the total precision loss in the FFT, which is of the order of bits at size . We refer the reader to the analysis in [14], which can be adapted to the present context thanks to the above propositions.

3.3. Timings

Throughout this article, timings are measured on a platform equipped with an Intel Core™ i7-4770 CPU @ 3.40 GHz and 8 GB of 1600 MHz DDR3 memory. It runs the Debian GNU/Linux “Jessie” operating system with Linux kernel version 3.14 in 64-bit mode. We measured average timings, while taking care to avoid CPU throttling issues. We compile with GCC [10] version 4.9.1.

8 9 10 11 12 13 14 15 16
double 0.43 0.94 2.3 5.4 15 34 85 190 400
long double 6.1 14 31 70 153 332 720 1600 3400
__float128 89 205 463 1000 2300 5100 11000 24000 51000

Table 1. FFTW 3.3.4 in size , in micro-seconds.

For the sake of comparison, Table 1 displays timings to compute FFTs over complex numbers with the FFTW library version 3.3.4 [9]. Timings are obtained via the command test/bench bundled with the library. The row double corresponds to the standard double precision in C and the configuration options --enable-avx and --enable-fma. The row long double is obtained in the same way with the configuration option --enable-long-double; on the current platform, this corresponds to using the x87 instructions on 80-bit wide floating point numbers. The last row corresponds to the IEEE compliant quadruple precision type __float128 provided by the compiler, with the configuration options --enable-fma and --enable-quad-precision.

8 9 10 11 12 13 14 15 16
double 0.54 1.1 2.5 6.8 16 37 85 220 450
long double 9.5 21 48 110 230 500 1100 2300 5000
__float128 94 220 490 1100 2400 5300 11000 25000 53000
quadruple 5.0 11 25 55 120 260 570 1200 2600
fixed_quadruple 2.8 6.4 15 33 71 160 380 820 1700
MPFR (113 bits) 270 610 1400 3100 6800 15000 33000 81000 230000

Table 2. Mathemagix in size , in micro-seconds.

Our Mathemagix implementations of the algorithms in this paper are available from revision . Timings are reported in Table 2. We have implemented the split-radix algorithm [19]. The configuration process of Mathemagix automatically detects and activates the AVX2 and FMA instruction sets. Roots of unity and the necessary twiddle factors are precomputed once, with full precision, using MPFR version 3.1.2 based on GMP version 6.0.0.

The first three rows concern the same built-in numerical types as in the previous table. The row quadruple makes use of our own non-IEEE-compliant implementation of the algorithms of [23], sometimes called double-double arithmetic (see file numerix/quadruple.hpp). The row fixed_quadruple corresponds to the new algorithms of this section with nail bits. In the rows double, quadruple, and fixed_quadruple, the computations fully benefit from AVX and FMA instructions. Finally, the row MPFR contains timings obtained when using MPFR floating point numbers with the bit precision set to 113. At this precision, our implementation involves an overhead due to the fact that the MPFR type mpfr_t is wrapped into a C++ class with reference counters (see file numerix/floating.hpp).

Let us mention that our type quadruple requires arithmetic operations (counting FMA and FMS for ) for a product, and for an addition or subtraction. A direct butterfly thus amounts to elementary operations. On the other hand, our type fixed_quadruple needs only 48 such operations. It is therefore satisfying to observe that these estimates are reflected in practice for small sizes, where the cost of memory caching is negligible.
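
For instance, granting the per-operation costs reported later in Table 3 (4 for Normalize, 2 for Add/Subtract, 5 for Multiply in this setting), the 48 operations of a fixed_quadruple butterfly may be accounted for as 4·5 + 2·2 = 24 operations for ComplexMultiply, 2·(2·2) = 8 for ComplexAdd and ComplexSubtract, and 4·4 = 16 for the four final normalizations.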

4. Generalizations to higher precisions

The representation with nail bits and the algorithms designed so far for doubled precision can be extended to higher precisions, as explained in the next paragraphs.

4.1. Representation and normalization

Given and an integer , we denote by the set of numbers of the form

where are such that

We write for numbers of the above form and abbreviate . Numbers in are said to be in normal form. Of course, we require that is smaller than . For this, we assume that . The normalization procedure generalizes as follows (in practice, the loop is of course unrolled):

Algorithm Normalize()

for from down to do

return

Proposition 11. Given a fixed point number with , the algorithm Normalize returns with .

Proof. During the first step of the loop, when , by Proposition 2 we have and . Using the fact that , this entails and therefore . Hence , so that . Using similar arguments for , we obtain and . Finally, since we have , which concludes the proof.

4.2. Ring operations

The generalizations of Add and Subtract are straightforward: , and .

Proposition 12. Let and with . If , then with . If , then with .

Proof. Since both and are integer multiples of for , the additions are exact. Furthermore, the exponents of and are both strictly bounded by . Consequently, and . The statement for the subtraction follows by replacing by .

Multiplication can be generalized as follows (all loops again being unrolled in practice):

Algorithm Multiply()

for from to do

for from to do

for from to do

return

Proposition 13. Let and with , , , and . Then with .

Proof. Let us write for the pair returned by , for . Since , Proposition 7 implies that , , and . At the end of the algorithm we have and for all , and therefore for all . In particular no overflow occurs and these sums are all exact. Let represent the value of before entering the second loop. Then let for , so that holds at the end of the algorithm. We have , and for all . For the precision loss, from , we deduce that

The proof follows from , while using that and .
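
The following generic (non-unrolled) C++ sketch illustrates the schoolbook scheme: only the partial products contributing to the retained limbs are formed, each of them is split with a LongMul-style exact product, and the lowest retained position is handled with plain FMAs. The indexing convention, the constants and the accumulation order are our assumptions, and the sketch makes no attempt to reach the exact operation counts of Table 3.

  #include <array>
  #include <cmath>

  constexpr int K  = 3;                  // hypothetical number of limbs
  constexpr int MU = 49;                 // hypothetical mu = 52 - 3 nail bits

  using fixedk = std::array<double, K>;  // assumed value a[0] + ... + a[K-1],
                                         // with a[i] of order 2^(-i MU)

  // Exact product x * y = h + l with h a multiple of 2^e.
  static void long_mul(double x, double y, int e, double& h, double& l) {
    double c = std::ldexp(1.5, 52 + e);
    h = std::fma(x, y, c) - c;
    l = std::fma(x, y, -h);
  }

  // Schoolbook product keeping the limb positions 0, ..., K-1 only.
  static fixedk multiply(const fixedk& a, const fixedk& b) {
    fixedk r{};                          // zero-initialized accumulators
    for (int i = 0; i < K; ++i)
      for (int j = 0; i + j < K; ++j) {
        int m = i + j;
        if (m == K - 1) {                // lowest position: the low part is dropped
          r[m] = std::fma(a[i], b[j], r[m]);
        } else {                         // split and push the low part one limb down
          double h, l;
          long_mul(a[i], b[j], -(m + 1) * MU, h, l);
          r[m]     += h;
          r[m + 1] += l;
        }
      }
    return r;                            // in general not yet in normal form
  }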

4.3. Fast Fourier transforms

We can use the same butterfly implementations as in Section 3.2. The following generalization of Proposition 9 shows that we may take as long as , and as long as for the direct butterfly operation.

Proposition 14. Let with and assume that and . Then and .

Proof. Let be as in ComplexMultiply with and as arguments. From Proposition 13, it follows that and , and we have similar bounds for and . From Proposition 12, we next get and . Applying Proposition 12 again, we obtain , and . The conclusion now follows from Proposition 11.

The following generalization of Proposition 10 shows that we may take as long as , as long as , and as long as for the inverse butterfly operation.

Proposition 15. Let with and assume that and . Then and .

Proof. Proposition 12 yields and , and . Proposition 11 then gives us , . Let be as in ComplexMultiply with and as arguments. From Proposition 13, it follows that , , and we have similar bounds for and . From Proposition 12 again, we get and . Finally Proposition 11 gives us , .

Remark 16. Within Mathemagix, inverse butterflies are systematically used in all inverse FFT implementations, so as to benefit from in-place algorithms without bit-reverse copies. Nevertheless, if the precision becomes of critical importance, then we recommend using the direct FFT transform with replaced by . In our code for and , we have decided to keep by performing one additional normalization of in .

Remark 17. The following strategy sometimes makes it possible to decrease by one: instead of normalizing the entries of an FFT to be of norm , we rather normalize them to be of norm for some small constant . This should allow us to turn the conditions and in Propositions 14 and 15 into and . One could also investigate what happens if we additionally assume .

It is interesting to compute the number of operations in which are needed for our various fixed point operations, the allowed operations in being addition, subtraction, multiplication, FMA and FMS. For larger precisions, it may also be worthwhile to use Karatsuba's method [20] for complex multiplication, meaning that we compute using the formula . This saves one multiplication at the expense of three additions/subtractions and an additional increase of . The various operation counts are summarized in Table 3.

1 2 3 4 5 6
Normalize 0 4 8 12 16 20
Add/subtract 1 2 3 4 5 6
Multiply 1 5 15 30 50 75
Butterfly 8 48 110 192 294 416
Butterfly-Karatsuba 10 49 104 174 259 359

Table 3. Operation counts in terms of basic arithmetic operations in .
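
Since the formula is not reproduced above, let us recall one common three-multiplication variant (the exact form used for the Butterfly-Karatsuba row is an assumption on our part):

  (a + b i)(c + d i) = (ac - bd) + ((a + b)(c + d) - ac - bd) i,

which indeed trades one real multiplication for three extra additions or subtractions compared to the naive scheme.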

4.4. Timings

For our C++ implementation inside Mathemagix we introduced two additional template types

template<typename C, typename V> class fixed_hexuple;

template<typename C, typename V> class fixed_octuple;

with similar semantics as fixed_quadruple. When instantiating for C=double and the default variant V, these types correspond to and with . In the future, we intend to implement a more general template type which also takes as a parameter.

Table 4 shows our timings for FFTs over the above types. Although the butterflies of the split-radix algorithm are a bit different from the classical ones, it is pleasant to observe that our timings roughly reflect the operation counts of Table 3. For comparison, let us mention that the time necessary to perform (essentially) the same computations with MPFR is roughly the same as in Table 2, even in octuple precision. In addition (cf. Table 2), we observe that the performance of our fixed_hexuple FFT is of the same order as FFTW's long double implementation.

8 9 10 11 12 13 14 15 16
double 0.54 1.1 2.5 6.8 16 37 85 220 450
fixed_quadruple 2.8 6.4 15 33 71 160 380 820 1700
fixed_hexuple 7.6 17 38 84 180 400 870 1800 3900
fixed_octuple 18 42 93 200 450 980 2100 4500 9600

Table 4. Mathemagix in size , in micro-seconds.

Thanks to the genericity of our C++ implementation, it is possible to directly compute FFTs on a SIMD type such as fixed_hexuple<avx_double>, which corresponds to packing four elements of type fixed_hexuple<double> into an AVX data type. Combined with classical cache-friendly approaches, this makes it possible to compute multi-dimensional FFTs in an efficient way, which more closely reflects the operation counts of Table 3. When computing an FFT, say over fixed_hexuple<double>, the input data is in fact packed in such a way that most of the computation reduces to one FFT over fixed_hexuple<avx_double> of size divided by four. Nevertheless, some of the C++ code needs to be fine-tuned in order to fully benefit from SIMD instructions, which turns out to constitute an important part of the implementation work.

5. Variants and perspectives

Polynomial representations
In this paper, we have chosen to represent multi-precision fixed point numbers as sums . Alternatively, we may regard such numbers as polynomials

in the “indeterminate” , with and . It can be checked that the algorithms of this paper can be adapted to this setting with no penalty (except for multiplication which may require one additional instruction). Roughly speaking, every time that we need to add a “high” part and a “low” part of two long products, it suffices to use a fused-multiply-add instruction instead of a simple addition .

For naive multiplication, the polynomial representation may become slightly more efficient for , when the hardware supports a SIMD instruction for rounding floating point numbers to the nearest integer. In addition, when becomes somewhat larger, say , the polynomial representation makes it possible to use even–odd Karatsuba multiplication [12, 16] without any need for special carry handling (except that has to be chosen a bit larger). Finally, the restriction is no longer necessary, so the approach can in principle be extended to much larger values of .

Integer arithmetic
The current focus of vendors of wide SIMD arithmetic is on fast floating point arithmetic, whereas integer arithmetic is left somewhat behind. For instance, Intel's AVX-512 technology features efficient arithmetic (including FMA) on vectors of eight double precision numbers. On the other hand, SIMD integer multiplications are limited to 16-bit and 32-bit integers.

In order to develop an efficient GMP-style library for SIMD multiple precision integers, it would be useful to have an instruction for multiplying two vectors of 64-bit integers and obtaining the lower and higher parts of the result in two other vectors of 64-bit integers. For multiple precision additions and subtractions, it is also important to have vector variants of the “add with carry” and “subtract with borrow” instructions.

Assuming the above instructions, a bit unsigned truncated integer multiplication can be done in SIMD fashion using approximately instructions. Furthermore, in this setting, there is no need for normalizations. However, signed multiplications are somewhat harder to achieve. The best method for performing a signed integer multiplication with is probably to perform the unsigned multiplication with and use the fact that . We expect that the biggest gain from using integer arithmetic comes from the fact that we achieve a instead of a bit precision (for small , we typically have ).
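
To illustrate the kind of correction involved, here is a small scalar C++ sketch (the function mulhi_u stands in for the hypothetical SIMD high-multiply instruction, and the compiler's 128-bit integers are only used to emulate it): reading a negative operand as unsigned overestimates the product by 2^64 times the other operand, which only affects the high word.

  #include <cassert>
  #include <cstdint>

  // High 64 bits of the unsigned product; stand-in for a hypothetical SIMD
  // "64 x 64 -> high 64" instruction, emulated here with 128-bit integers.
  static uint64_t mulhi_u(uint64_t x, uint64_t y) {
    return (uint64_t)(((unsigned __int128) x * y) >> 64);
  }

  // Signed high word recovered from the unsigned one: if x < 0, the unsigned
  // reading of x exceeds the signed one by 2^64, so 2^64 * y must be removed
  // from the product, which decrements the high word by y; similarly for y.
  static int64_t mulhi_s(int64_t x, int64_t y) {
    uint64_t hi = mulhi_u((uint64_t) x, (uint64_t) y);
    if (x < 0) hi -= (uint64_t) y;
    if (y < 0) hi -= (uint64_t) x;
    return (int64_t) hi;
  }

  int main() {
    assert(mulhi_s(-3, 5) == -1);                                 // -15 = (-1) * 2^64 + low word
    assert(mulhi_s(INT64_MIN, INT64_MIN) == ((int64_t) 1 << 62)); // 2^126 has high word 2^62
    return 0;
  }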

One potential problem with GMP-style SIMD arithmetic is that asymptotically faster algorithms, such as Karatsuba multiplication (and especially the even–odd variant), are harder to implement. Indeed, we are not allowed to resort to branching for carry handling. One remedy would be to use expansions with coefficients for a suitable number of nail bits and a suitable “multiplexer” .

Faster splitting
An interesting question is whether hardware support for certain specific operations might speed up the fixed point arithmetic in this paper. One operation which is relatively expensive is normalization. It would be nice to have an instruction which takes and as input and which computes numbers with , and . In that case, normalization would only require two instructions instead of four. Besides speeding up the naive algorithms, this would also make more frequent normalizations less expensive, which makes the use of asymptotically more efficient multiplication algorithms more tractable at lower precisions and with fewer nail bits. More efficient normalization is also important for efficient shifting of mantissas, which is the main additional ingredient for the implementation of multiple precision arithmetic.

Asymptotically efficient algorithms
A theoretically important question concerns the asymptotic complexity of FFTs at large precisions. Let denote the bit complexity of bit integer multiplication. It has recently been proved [14] that , where . Under mild assumptions on and it can also be shown [14] that an FFT of size and bit precision can be performed in time . For and large , this means that scales linearly with . In a future paper, we intend to investigate the best possible (and practically achievable) constant factor involved in this bound.

Bibliography

[1]

A. Avižienis. Signed-digit number representations for fast parallel arithmetic. IRE Transactions on Electronic Computers, EC-10(3):389–400, 1961.

[2]

D. H. Bailey, R. Barrio, and J. M. Borwein. High precision computation: Mathematical physics and dynamics. Appl. Math. Comput., 218:10106–10121, 2012.

[3]

R. P. Brent. A Fortran multiple-precision arithmetic package. ACM Trans. Math. Software, 4:57–70, 1978.

[4]

R. P. Brent and P. Zimmermann. Modern Computer Arithmetic. Cambridge University Press, 2010.

[5]

S. Chakraborty, U. Frisch, W. Pauls, and S. S. Ray. Nelkin scaling for the Burgers equation and the role of high-precision calculations. Phys. Rev. E, 85:015301, 2012.

[6]

J. W. Cooley and J. W. Tukey. An algorithm for the machine calculation of complex Fourier series. Math. Comp., 19:297–301, 1965.

[7]

T. J. Dekker. A floating-point technique for extending the available precision. Numer. Math., 18(3):224–242, 1971.

[8]

L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, and P. Zimmermann. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Software, 33(2), 2007. Software available at http://www.mpfr.org.

[9]

M. Frigo and S. G. Johnson. The design and implementation of FFTW3. Proc. IEEE, 93(2):216–231, 2005.

[10]

GCC, the GNU Compiler Collection. Software available at http://gcc.gnu.org, from 1987.

[11]

T. Granlund et al. GMP, the GNU multiple precision arithmetic library. http://www.swox.com/gmp, from 1991.

[12]

G. Hanrot and P. Zimmermann. A long note on Mulders' short product. J. Symbolic Comput., 37(3):391–401, 2004.

[13]

D. Harvey. Faster arithmetic for number-theoretic transforms. J. Symbolic Comput., 60:113–119, 2014.

[14]

D. Harvey, J. van der Hoeven, and G. Lecerf. Even faster integer multiplication, 2014. http://arxiv.org/abs/1407.3360.

[15]

Y. Hida, X. S. Li, and D. H. Bailey. Algorithms for quad-double precision floating point arithmetic. In Proc. 15th IEEE Symposium on Computer Arithmetic, pages 155–162. IEEE, 2001.

[16]

J. van der Hoeven and G. Lecerf. On the complexity of multivariate blockwise polynomial multiplication. In J. van der Hoeven and M. van Hoeij, editors, Proc. 2012 International Symposium on Symbolic and Algebraic Computation, pages 211–218. ACM, 2012.

[17]

J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix, from 2002. http://www.mathemagix.org.

[18]

J. van der Hoeven, G. Lecerf, and G. Quintin. Modular SIMD arithmetic in Mathemagix, 2014. http://hal.archives-ouvertes.fr/hal-01022383.

[19]

S. G. Johnson and M. Frigo. A modified split-radix FFT with fewer arithmetic operations. IEEE Trans. Signal Process., 55(1):111–119, 2007.

[20]

A. Karatsuba and J. Ofman. Multiplication of multidigit numbers on automata. Soviet Physics Doklady, 7:595–596, 1963.

[21]

C. Lauter. Basic building blocks for a triple-double intermediate format. Technical Report RR2005-38, LIP, ENS Lyon, 2005.

[22]

J.-M. Muller, N. Brisebarre, F. de Dinechin, C.-P. Jeannerod, V. Lefèvre, G. Melquiond, N. Revol, D. Stehlé, and S. Torres. Handbook of Floating-Point Arithmetic. Birkhäuser Boston, 2010.

[23]

T. Nagai, H. Yoshida, H. Kuroda, and Y. Kanada. Fast quadruple precision arithmetic library on parallel computer SR11000/J2. In Computational Science - ICCS 2008, 8th International Conference, Kraków, Poland, June 23-25, 2008, Proceedings, Part I, pages 446–455, 2008.

[24]

D. M. Priest. Algorithms for arbitrary precision floating point arithmetic. In Proc. 10th Symposium on Computer Arithmetic, pages 132–145. IEEE, 1991.