On the complexity of symbolic computation

Joris van der Hoeven
CNRS, École polytechnique, Institut Polytechnique de Paris
Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161)
Bâtiment Alan Turing, CS35003
1, rue Honoré d'Estienne d'Orves
91120 Palaiseau, France

Abstract. In this paper, we survey various basic and higher level tasks in computer algebra from the complexity perspective. Particular attention is paid to problems that are fundamental from this point of view and to interconnections between different problems.

1 Introduction

Which problems in symbolic computation can be solved reasonably fast? Given a problem, would it be possible to predict how much time is needed to solve it? The area of computational complexity was designed to provide answers to such questions. However, symbolic computation has a few particularities that call for extra or separate techniques.

One particularity is that the most prominent algorithms typically get implemented, so we can check theory against practice. A satisfactory complexity theory in our area should at least be able to explain or predict the running times that are observed in practice. However, such a theory should be more than an ennobled tool for benchmarking. We also wish to gain insight into the complexity of problems, understand how the complexities of different problems are related, and identify the most central computational tasks in a given area (these tasks typically require highly optimized implementations).

Traditionally, a distinction is made between the analysis of algorithms (how to analyze the complexity of a given algorithm) and complexity theory (fitting problems into complexity classes and proving lower bounds). This paper partially falls in the second category, since we will be interested in the complexity of both algorithms and problems. Such a broader perspective is particularly useful if you want to implement a new computer algebra system, like in my case, or a library for such a system.

However, traditional complexity theory is not very adequate for computer algebra, due to the disproportionate focus on lower bounds and coarse complexity classes. Fortunately, as will be argued in Section 3, we already have many ingredients for developing a more meaningful complexity theory for our area. I found it non-trivial to organize these ingredients into a "grand theory". In this survey, I will therefore focus on concrete examples that reflect the general spirit.

The first series of examples, in Section 4, concerns basic operations like multiplication, division, and gcds, for various algebras. We will see how various basic complexities are related and how some of them can be derived using general techniques. In Section 5, we will turn to the higher level problem of polynomial system solving. My interest in this problem is more recent and driven by complexity considerations. I will survey advantages and limitations of the main existing approaches and highlight a few recent results.

2 Two extremist views on complexity theory

2.1 Traditional complexity theory

Standard theoretical computer science courses on computability and complexity theory promise insight into two questions:

  - What can we compute?
  - What can we compute efficiently?

Whereas there is broad consensus about the definition of computability (Church's thesis), it is harder to give a precise definition of "efficiency". For this, traditional complexity theory heavily relies on complexity classes (such as P, NP, EXPTIME, etc.) in order to describe the difficulty of problems.
In principle, this provides some flexibility, although many courses single out P as "the" class of problems that can be solved "efficiently". Now the question whether a given problem lies inside P is actually too hard to settle for virtually all interesting problems, since it involves proving notoriously difficult lower bounds. Our introductory course therefore goes on with the definition of polynomial time reductions in order to relate the difficulties of different problems: if an NP-complete problem A reduces in polynomial time to a problem B inside NP, then B is NP-complete as well. This allows us to use NP-completeness as a surrogate for "lower bound". A big part of complexity theory is then devoted to proving such hardness results: the emphasis is on what cannot be computed fast; understanding which problems can be solved efficiently (and how and why) is less of a concern (this is typically delegated to other areas, such as the analysis of algorithms).

There are numerous reasons why this state of the art is unsatisfactory for symbolic computation. At best, negative hardness results may be useful in the sense that they may prevent us from losing our time on searching for efficient solutions. Although:

    "[...]"  (Richard Jenks)

For instance, deciding whether a system of polynomial equations over $\mathbb{F}_2$ has a solution is a typical NP-complete problem, since it is equivalent to SAT. At the same time, the computer algebra community has a long and successful history of solving polynomial systems in practice. From a theoretical point of view, it was also proved recently that the solutions of a generic dense system of polynomial equations over the rationals can be computed in expected quasi-optimal time in terms of the expected output size; see Section 5.5 below.

In the past decade, there has been a lot of work to understand such paradoxes and counter some of the underlying criticism. For instance, the specific nature of certain input instances is better taken into account via, e.g., parametric or instance complexity. Complexity classes with more practical relevance than P and NP have also received wider attention. But even such upgraded complexity theories continue to miss important points.

First of all, lower bound proofs are rarely good for applications: for the above polynomial time reduction of the NP-complete problem A to the problem B, any polynomial time reduction will do. If we are lucky to hit an easy instance of A, such a reduction typically produces an intractable instance of B. In other words: negative reductions do not need to be efficient. Secondly, the desire to capture complexity via complexity classes is unrealistic: even a constant time algorithm (like any terminating algorithm on a finite computer) is not necessarily "practical" if the constant gets large; constants are essential in practice, but hidden through asymptotics.

A more subtle blind spot concerns the framework of complexity classes itself, which has modeled the way several generations have been thinking about computational complexity. Is it really most appropriate to put problems into pigeonholes P, NP, EXPTIME, and so on? Sorting and matrix multiplication are both in P; does this mean that these problems are related? In computer algebra, we have efficient ways to reduce division and gcds to multiplication (see the sketch below for division); the complexities of interpolation, multi-point evaluation, factorization, and root finding are also tightly related (see Section 5); a good complexity theory should reflect such relations.
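To make this point concrete, here is a minimal Python sketch of how power series division reduces to multiplication via Newton iteration. The function names, the use of exact rational coefficients, and the schoolbook product are my own choices for this illustration; any fast multiplication routine could be plugged in instead of mul.

    # A minimal sketch: inverting a power series 1/f modulo x^n by Newton
    # iteration, so that division costs only a few multiplications.
    from fractions import Fraction

    def mul(f, g, n):
        """Product of the polynomials f and g, truncated modulo x^n."""
        h = [Fraction(0)] * n
        for i, a in enumerate(f[:n]):
            if a:
                for j, b in enumerate(g[: n - i]):
                    h[i + j] += a * b
        return h

    def inverse(f, n):
        """Power series inverse of f modulo x^n, assuming f[0] != 0."""
        g = [Fraction(1, 1) / f[0]]            # inverse modulo x
        prec = 1
        while prec < n:
            prec = min(2 * prec, n)            # double the precision
            # Newton step: g <- g * (2 - f * g) mod x^prec
            fg = mul(f, g, prec)
            corr = [Fraction(2) - fg[0]] + [-c for c in fg[1:]]
            g = mul(g, corr, prec)
        return g

    if __name__ == "__main__":
        f = [Fraction(1), Fraction(-1)]        # f = 1 - x
        print(inverse(f, 8))                   # the geometric series 1, 1, 1, ...

With a softly linear multiplication in place of mul, each precision-doubling step stays softly linear, which is exactly the kind of relation between fundamental complexities that the text advocates.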
2.2 Take your stopwatch

Implementors tend to have a more pragmatic point of view on complexity:

    "[...]"  (Mark van Hoeij)

It is true that a complexity-oriented approach to the development of algorithms (as advocated by the author of these lines) may be overkill. For instance, there has been a tendency in computer algebra to redevelop a lot of linear algebra algorithms in order to obtain complexities that are good in terms of the exponent $\omega < 2.3729$ of matrix multiplication. But $\omega$ stays desperately close to 3 in practice.

Yet, there are obvious pitfalls when throwing all complexity considerations overboard: your stopwatch will not predict whether you should take a cup of coffee while your algorithm runs, or whether you should rather go to bed. What one often needs is a rough educated idea about expected running times. Such a relaxed attitude towards complexity is omnipresent in neighboring areas such as numerical analysis. The popular textbook Numerical Recipes is full of assertions like "the LU decomposition [...] requires about $\frac13 N^3$ executions [...]" (end of Section 2.3.1), "the total operation count for both eigenvalues and eigenvectors is about $25 n^3$" (Section 11.7), and so on.

Another obvious problem with benchmarks is that experiments on computers from fifty years ago may be hard to reproduce today; and they probably lost a lot of their significance with time. But there are also less obvious and more serious problems when your stopwatch is all you have got. I hit one such problem during the development of relaxed power series arithmetic. Pólya showed that the generating series of the number of stereoisomers with molecular formula $\mathrm{C}_n \mathrm{H}_{2n+2} \mathrm{O}$ satisfies the equation

    s(z) = 1 + (z/3) (s(z)^3 + 2 s(z^3)).

Many terms of this series can be computed using relaxed arithmetic (see the sketch at the end of this subsection). However, when doing so for rational coefficients, timings were much slower than expected. The reason was that GMP used a naive algorithm for large gcd computations at that time. This came as a big surprise to me: if I had just tried the new algorithms with a stopwatch in my hand, then I probably would have concluded that they were no good!

This was a serious problem more generally: around the same period, at every computer algebra meeting, there would be an afternoon on fast linear algebra without divisions (variants of Bareiss' algorithm). This research direction became pointless when GMP eventually integrated a better algorithm to compute gcds.

When implementing your own algorithm in a modern computer algebra system, you typically rely on existing implementations of other algorithms, some of which may even be secret in proprietary systems. You also rely on very complex hardware with multi-core SIMD processors and many levels of cache memory. One of the aims of complexity theory, as opposed to the analysis and benchmarking of a single algorithm, should be to get a grip on this broad picture.

A better global understanding might also guide future developments. For instance, working in algebraic extensions has bad press in computer algebra for being slow; "rational" algorithms have been developed to counter this problem. But is this really warranted from a complexity point of view? Could it be that algebraic extensions simply are not implemented that well? In symbolic computation, it is also natural to privilege symbolic and algebraic methods over numeric or analytic ones, both because many of us simply like them better and because we have more efficient support for them in existing systems. But maybe some algebraic problems could be solved more efficiently using numerical methods. What about relying more heavily on homotopy methods for polynomial system solving? What about numerical differential Galois theory?
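The following sketch illustrates the lazy style of computation behind the Pólya example: coefficients are produced one by one, each new coefficient only using the coefficients that are already known. It is a naive, non-relaxed illustration (the cube is recomputed from scratch at each step); the functional equation is the one reconstructed above and the function name is mine.

    # Lazy computation of the coefficients of Polya's generating series
    # s(z) = 1 + z*(s(z)^3 + 2*s(z^3))/3, one coefficient at a time.
    from fractions import Fraction

    def polya_coefficients(n):
        """First n coefficients of the series s(z)."""
        s = [Fraction(1)]                      # s_0 = 1
        for k in range(1, n):
            # [z^(k-1)] s(z)^3, using only the known coefficients s_0 .. s_{k-1}
            cube = Fraction(0)
            for i in range(k):
                for j in range(k - i):
                    l = k - 1 - i - j
                    if 0 <= l < k:
                        cube += s[i] * s[j] * s[l]
            # [z^(k-1)] s(z^3) equals s_{(k-1)/3} when 3 divides k-1, else 0
            diag = s[(k - 1) // 3] if (k - 1) % 3 == 0 else Fraction(0)
            s.append((cube + 2 * diag) / 3)
        return s

    if __name__ == "__main__":
        print([int(c) for c in polya_coefficients(10)])   # 1, 1, 1, 2, 5, ...

A relaxed multiplication algorithm would replace the naive cube by an online product with quasi-linear total cost; and, as explained above, the coefficients quickly become rationals with large numerators, so the cost of the underlying integer and gcd arithmetic dominates in practice.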
3 Symbolic computation

3.1 Complexity at ISSAC

How does the ISSAC community deal with complexity issues? In Table 1, I compiled some statistics for the papers in the proceedings, while limiting myself to the years 1997, 2004, 2012, 2017, and 2021. I checked whether the papers contained benchmarks, a complexity analysis, both, or nothing at all.

    [Table 1: Statistics for the number of papers in the ISSAC proceedings of 1997, 2004, 2012, 2017, and 2021 that contain benchmarks, a complexity analysis, nothing of this kind, or both.]

It is clear from Table 1 that the ISSAC community cares about complexity issues. Many papers contain a detailed complexity analysis. For high-level algorithms with irregular complexities, authors often provide benchmarks instead. Besides providing interesting insights, one may also note that "objective" benchmarks and complexity results are frequently used to promote a new algorithm, especially when there are several existing methods to solve the same problem.

Only a few papers mention traditional complexity classes or lower bounds; problems are rarely examined through the lens of computational hardness. Instead, it is typically more relevant to determine the cost and the size of the output for generic instances. For instance, instead of proving that "polynomial system solving is hard", one would investigate the complexity of solving a generic system in terms of the Bézout bound (which is reached for such a system). Most theoretical complexity results are stated in the form of upper bounds. Both algebraic and bit-complexity models are very popular; occasionally, bounds are stated differently, in terms of heights, condition numbers, probability of success, etc.

One interesting characteristic of algorithms in symbolic computation is that their complexities can often be expressed in terms of the basic complexities of multiplying n-bit integers, degree d polynomials, and r x r matrices. Several papers even use special notations $M(n)$, $M(d)$, and $MM(r)$ for these complexities. In a similar manner, number theorists introduced

    L_N[a, c] = exp((c + o(1)) (log N)^a (log log N)^{1-a})

in order to express the complexities of fast algorithms for integer factorization and discrete logarithms (a small numerical illustration is given at the end of this subsection). We will return to these observations in Section 3.2 below.

In the past decade, there have been many developments "beyond the worst-case" analysis of algorithms. To which extent are these developments relevant to the ISSAC community? For instance, a hard problem may become easier for particular input distributions that are common in practice. It is important to describe and understand such distributions more precisely. Surprisingly, very few papers in our area pursue this point of view. On the other hand, there are many lists of "interesting" problems for benchmarking. Conversely, only the last page 660 of the recent reference book on this topic contains a table with experimental timings.

In my personal research, I also encountered various interesting algorithms that scream for yet other non-traditional analyses: amortized complexity (fast algorithms modulo precomputations), conjectured complexity (dependence on plausible but unproven conjectures), heuristic complexity, and irregular complexity (various operations over finite fields $\mathbb{F}_q$ with $q = p^k$ can be done faster if $k$ is composite or smooth). Hardy already noted that many natural orders of growth can be expressed using exp-log functions; this in particular holds for the asymptotic complexities of many algorithms.
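As promised, here is a small numerical illustration (entirely my own, not from the paper) of the L-notation: the main term of $L_N[a, c]$ is evaluated for a 1024-bit modulus and compared with polynomial growth. The constants 1.9 and 1.0 are illustrative choices roughly corresponding to the number field sieve and quadratic sieve regimes.

    # Comparing the subexponential growth L_N[a, c] with polynomial growth.
    from math import exp, log

    def L(N, a, c):
        """The main term of L_N[a, c], with the o(1) contribution dropped."""
        return exp(c * log(N) ** a * log(log(N)) ** (1 - a))

    N = 2 ** 1024
    print(f"L_N[1/3, 1.9] ~ 2^{log(L(N, 1/3, 1.9), 2):.0f}")   # NFS-like regime
    print(f"L_N[1/2, 1.0] ~ 2^{log(L(N, 1/2, 1.0), 2):.0f}")   # quadratic-sieve-like regime
    print(f"(log N)^3     ~ 2^{log(log(N) ** 3, 2):.0f}")       # polynomial time, for comparison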
3.2 Fundamental complexities

Instead of searching for "absolute" complexity bounds, it has become common to express complexities in terms of other, more "fundamental" complexities, such as the quantities $M(n)$, $M(d)$, $MM(r)$, and $L_N[a,c]$ that were introduced above. In a similar way as traditional complexity theory is organized through the complexity classes P, NP, EXPTIME, and so on, I think that fundamental complexities can play a similar role for symbolic computation and beyond. Whereas complexity classes give a rough idea about how fast certain problems can be solved, fundamental complexities provide more detailed information, such as "the problem essentially reduces to linear algebra". In fact, the language of complexity bounds (especially when enriched with notations such as $M(n)$, $M(d)$, and $MM(r)$) is more expressive and precise than the language of complexity classes: whereas

    PRIMES ∈ P

vaguely states that primality can be checked "fast",

    G(d) = O(M(d) log d)                                            (1)

states that computing gcds essentially reduces to multiplication, plausibly using a dichotomic algorithm.

Another advantage of (re-)organizing complexity theory in this direction is that fundamental complexities are robust: the bound (1) holds on Turing machines, for algebraic complexity models, on quantum computers, as well as on many parallel machines. Deciding whether integer factorization can be done in polynomial time may lead to different outcomes on Turing machines and quantum computers.

With the help of fundamental complexities, even constant factors can be made more precise. For instance, one may show that

    G(d) <= (c + o(1)) M(d) log d

for an explicit constant c. This often makes it possible to predict running times very accurately, provided we know the performance of a few basic algorithms. Implementors can then optimize these basic algorithms. Fundamental algorithms may also be implemented in hardware. For instance, Apple's new M1 processor contains a matrix coprocessor. With suitable hardware, it might very well be possible to "achieve" $\omega = 2$ for many real world applications.

The question remains which complexities are worthy of being dubbed "fundamental". This question is actually a productive common thread whenever you want to understand complexity issues in some area. It asks you to identify the most fundamental algorithmic building blocks. Sometimes, this requires the introduction of a new fundamental complexity (such as modular composition or multi-point evaluation for polynomial system solving; see Section 5 below). Sometimes, you may find out that your area reduces to another one from an algorithmic point of view. For instance, computations with the most common Ore operators essentially reduce to polynomial linear algebra.

3.3 Back to our stopwatch

Asymptotic complexity bounds can be treacherous from a practical point of view. The most notorious example is matrix multiplication: it has been proved that $\omega < 2.3729$, but I am not aware of any implementation that performs better than Strassen's algorithm, for which $\omega = \log_2 7 \approx 2.807$. A more interesting example is Kedlaya–Umans' recent algorithm for modular composition. We were unable to make it work fast in practice, but it is unclear whether variants of the algorithm might allow for efficient implementations. Whereas hardness results dissuade implementors from searching for overly efficient solutions to certain problems, the main practical interest of the above complexity bounds is to dissuade complexity theorists from trying to prove overly pessimistic lower bounds.
One useful indicator for practical differences in performance is the crossover point: if an algorithm A is asymptotically faster than an algorithm B, for which input size $n_0$ does A actually become faster? Curiously, crossover points are often mentioned when discussing benchmarks, but they have not really been the subject of systematic studies: does anyone know what is the crossover point between naive matrix multiplication and any of the asymptotically fast algorithms with $\omega < 2.5$?

It is also illuminating to study what happens beyond the crossover point. For instance, for Strassen's algorithm versus naive matrix multiplication, assume that the crossover point is $d_0 \approx 1000$. Then this means that we need matrices of size $d \approx 32000$ in order to gain a factor two with Strassen's method: for $\lambda = 3 - \log_2 7 = \log_2(8/7)$, we have $(d/d_0)^{\lambda} = 2^{5\lambda} \approx 1.95$.

There are many bounds in the recent computer algebra literature of the form $T(n) = \tilde{O}(n^a + n^b)$. Here the soft-O notation $\tilde{O}(\varphi)$ stands for $O(\varphi \log^{O(1)} \varphi)$ and serves to "discard" all logarithmic factors. Although I think that it is a good practice to state complexity bounds in terms of $MM(r)$ (for instance, $MM(r)$ might be lower for sparse or structured matrices), I suspect that the discarded logarithmic factors are often far more significant.

In this paper, I focus on sequential bit-complexity and algebraic complexity. Of course, modern computers come with complex cache hierarchies and they are highly parallel (distributed over networks, multi-core, and with wide SIMD instructions). A model for measuring the cost of cache misses has been proposed in the literature, and there is a past ISSAC tutorial on parallel programming techniques in computer algebra. The Spiral project is also noteworthy for using computer algebra to generate parallel code for low level transforms such as FFTs.

4 Basic arithmetic

Computer algebra systems are usually developed in layers, as are mathematical theories. The bottom layer concerns numbers. The next layers contain basic data structures such as univariate polynomials, dense matrices, symbolic expressions, and so on. Gröbner bases, symbolic integration, and the like can be found in the top layers. My mental picture of a complexity theory for computer algebra roughly follows the same structure. The aim is to understand the complexity issues layer by layer, starting with the lowest ones.

4.1 Multiplication

In order to figure out the cost of basic arithmetic operations, multiplication is the central operation to analyze first. We have already encountered the fundamental complexities $M(n)$, $M_q(d)$, and $MM(r)$, for which the best known bounds are summarized in Table 2.

    Table 2. Best known complexity bounds for multiplying integers, polynomials, and matrices.

      M(n)    = O(n log n)                      n-bit integers, bit complexity
      M(d)    = O(d log d log log d)            degree d polynomials, algebraic complexity
      M_q(d)  = O(d log q 4^{log* d})           degree d polynomials over F_q, bit complexity
      M_q(d)  = O(d log q)                      idem, conjectured
      MM(r)   = O(r^omega), omega < 2.3729      r x r matrices, algebraic complexity

Multiplication is fundamental in two ways. On the one hand, many other operations can efficiently be reduced to multiplication. On the other hand, the techniques that allow for fast multiplication are typically important for other operations as well (a toy example of one such technique, Karatsuba's algorithm, is sketched below). In Table 3, I summarize the complexity of multiplication for a few other important algebras.

    [Table 3: Best known complexity bounds for multiplication in various further algebras, including relaxed (online) multiplication of integers and of power series, matrices of polynomials, linear differential operators, tower arithmetic, and sparse polynomials. In the last, heuristic, bound, s stands for the total size of the supports of the input and output.]
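As a toy illustration of the techniques behind Table 2, here is a self-contained sketch of Karatsuba's algorithm for polynomial multiplication. The threshold, the helper names, and the assumption that both inputs have the same length are my own simplifications.

    # Karatsuba multiplication of coefficient lists, with a crossover to the
    # schoolbook product for small sizes.
    def schoolbook(f, g):
        h = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] += a * b
        return h

    def add(f, g):
        n = max(len(f), len(g))
        return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0) for i in range(n)]

    def sub(f, g):
        n = max(len(f), len(g))
        return [(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0) for i in range(n)]

    def karatsuba(f, g, threshold=32):
        """Product of two coefficient lists of the same length n."""
        n = len(f)
        if n <= threshold:
            return schoolbook(f, g)
        k = n // 2
        f0, f1 = f[:k], f[k:]
        g0, g1 = g[:k], g[k:]
        lo = karatsuba(f0, g0, threshold)                             # f0*g0
        hi = karatsuba(f1, g1, threshold)                             # f1*g1
        mid = sub(sub(karatsuba(add(f0, f1), add(g0, g1), threshold), lo), hi)
        h = [0] * (2 * n - 1)
        for i, c in enumerate(lo):
            h[i] += c
        for i, c in enumerate(mid):
            h[i + k] += c
        for i, c in enumerate(hi):
            h[i + 2 * k] += c
        return h

    if __name__ == "__main__":
        f, g = list(range(1, 65)), list(range(2, 66))
        assert karatsuba(f, g) == schoolbook(f, g)

The three recursive products instead of four are exactly where the exponent $\log_2 3$ comes from; the crossover threshold is the kind of practical parameter discussed in Section 3.3.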
Other noteworthy bounds in the spirit of Tables 2 and 3 concern truncated multiplication, middle products, multivariate power series, symmetric polynomials, rectangular matrix multiplication, and sparse differential operators.

Technically speaking, many algorithms for fast multiplication rely on evaluation and interpolation (or, more generally, they rewrite the problem into a simpler multiplication problem for another algebra). The complexity of such algorithms can be expressed in terms of the number of evaluation points and the cost of the conversions. For instance, given an evaluation-interpolation scheme for degree d polynomials, let $N(d)$ and $E(d)$ denote the number of evaluation points and the cost of evaluation/interpolation. Then the cost of multiplying two r x r matrices with polynomial entries of degree d becomes

    N(d) MM(r) + 3 r^2 E(d).

The evaluation-interpolation scheme should carefully be selected as a function of d and r, with a trade-off between the efficiency of the conversions and the compactness of the evaluated polynomials. There exist numerous important schemes: fast Fourier transforms, Karatsuba and Toom–Cook transforms, Chinese remaindering, Nussbaumer (or Schönhage–Strassen) polynomial transforms, truncated Fourier transforms, Chudnovsky-style evaluation and interpolation, and so on.

4.2 Basic operations

For other basic arithmetic operations like division, square roots, gcds, matrix decompositions, exp, log, and so on, one may proceed in two stages: we first seek an efficient reduction to multiplication and then further optimize the method for particular multiplication schemes. For instance, operations such as division and square root, but also exponentials of formal power series, can be reduced efficiently to multiplication using Newton's method. When using such an algorithm in combination with FFT-multiplication, some of the FFTs are typically redundant. Constant factors can then be reduced by removing such redundancies. Note that Shoup's NTL library is an early example of a library that makes systematic use of specific optimizations for the FFT-model. One may go one step further and modify the algorithms such that even more FFTs are saved, at the cost of making the computations in the FFT-model a bit more expensive. This technique of FFT trading leads to even better asymptotic constant factors or complexities.

Table 4 contains a summary of the best known asymptotic complexity bounds for various basic operations on integers and/or floating point numbers. Similar bounds hold for polynomials and truncated power series. The first three bounds were actually proved for polynomials and/or series, while assuming the FFT-model, in which one degree n multiplication is equivalent to six DFTs.

    [Table 4: Complexity bounds for various basic operations on n-bit integers and/or floating point numbers. Division, square roots, and exponentials cost a small constant multiple of M(n); the first four bounds assume the FFT-model. Further operations such as gcds, Chinese remaindering (CRT), base conversion, and the evaluation of elementary and D-finite functions involve additional logarithmic factors.]

CRT stands for the Chinese remainder theorem; conversions in either direction have the same softly linear complexity. This was originally proved for polynomials, but also holds for integer moduli. For CRTs, the moduli are fixed, so quantities that only depend on the moduli can be precomputed. A D-finite (or holonomic) function is a function that satisfies a linear differential equation over $\overline{\mathbb{Q}}$, where $\overline{\mathbb{Q}}$ is the field of algebraic numbers. Many special functions are of this type.

Some bounds for transcendental operations are better for power series than for numbers. For instance, the exponential of a power series can be computed modulo $x^n$ in time $O(M(n))$, without the additional logarithmic overhead needed for the exponential of an n-bit number.
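The following sketch makes the last remark concrete for power series. It uses the elementary coefficient-by-coefficient recursion derived from $(\exp f)' = f' \exp f$ rather than the Newton iteration mentioned in the text, which would reach the same result in softly linear time; names and conventions are mine.

    # Exponential of a power series f with f(0) = 0, computed term by term
    # from the differential relation (exp f)' = f' * exp f.
    from fractions import Fraction

    def series_exp(f, n):
        """Coefficients of exp(f) modulo x^n, where f[0] = 0."""
        g = [Fraction(1)]                       # exp(f) has constant term 1
        for m in range(1, n):
            acc = Fraction(0)
            for k in range(1, min(m, len(f) - 1) + 1):
                acc += k * f[k] * g[m - k]      # coefficient of x^(m-1) in f'*g
            g.append(acc / m)
        return g

    if __name__ == "__main__":
        f = [Fraction(0), Fraction(1)]          # f = x
        print(series_exp(f, 6))                 # 1, 1, 1/2, 1/6, 1/24, 1/120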
Using lazy or relaxed arithmetic, in which coefficients are fed one by one, solving power series equations is as efficient as evaluating them. For instance, in the Karatsuba model, the exponential $g = \exp f$ of a power series $f$ with $f(0) = 0$ can be computed in time $O(M(d))$ using the equation $g = 1 + \int f' g$, which involves a single relaxed product. In the FFT-model, one has to pay a small, non-constant price for relaxed multiplication (see Table 3).

The numerical evaluation of analytic functions reduces to power series computations and the computation of effective error bounds. This can be used to solve many problems from numerical analysis with high precision. For instance, the reliable integration of an algebraic dynamical system between two fixed times and with a precision of n bits can be done in softly linear time in n. There also exist several software packages for reliable numerical computations with high precision.

Similar tables as Table 4 can be compiled for other algebras. For instance, the inverse of an r x r matrix with entries in a field $\mathbb{K}$ can be computed in time $O(r^{\omega})$. If $C(d, r)$ stands for the complexity of multiplying two operators in $\mathbb{K}[x]\langle\partial\rangle$ of degree at most d in x and order at most r in $\partial$, then exact right division of two such operators can be done in time $O(C(d, r) \log d)$.

4.3 Sparse interpolation

One big enemy of symbolic computation is intermediate expression swell. This phenomenon ruins the complexity of many algorithms. For instance, let A be an r x r matrix with formal entries $a_{i,j}$ and consider the computation of $((A^{-1})^{-1})_{1,1}$. For $r \geq 20$, the intermediate expression for $A^{-1}$ is prohibitively large when fully expanded as a rational function in the parameters $a_{i,j}$. But the end-result $a_{1,1}$ is fairly small.

The technique of sparse interpolation proposes a remedy to this problem. In general, we start with a blackbox function f, such as the above function that associates $((A^{-1})^{-1})_{1,1}$ to $(a_{i,j})_{1 \leq i,j \leq r}$. The function is usually given by a DAG that computes one or more polynomials or rational functions involving many parameters. The DAG should be of reasonable size, so that it can be evaluated fast. In our example, the DAG might use Gaussian elimination to compute the inverses. Then sparse interpolation allows for the efficient symbolic reconstruction of the actual polynomials or rational functions computed by f, whenever these are reasonably small. This is an old topic that has seen a lot of recent progress and for which nice surveys are available.

Let us focus on the case when f computes a polynomial P with rational coefficients. From a complexity perspective, the most important parameters are the size L of f as a DAG, the size s of the support of P, and the bit size b of the coefficients (the degree and dimension are typically small with respect to s, so we will ignore them for simplicity). Sparse interpolation goes in two stages: we first determine the support of P, typically using a probabilistic algorithm of Monte Carlo type; we next determine the coefficients. Now the first stage can be done modulo a sufficiently large prime p, so its complexity hardly depends on b. Using Prony's algorithm, this stage first involves O(s) evaluations of f; using Cantor–Zassenhaus' algorithm for polynomial root finding, we next need roughly $O(M_p(s) \log s \log p)$ operations to recover the support. Finally, the coefficients can be computed using transposed polynomial interpolation, in softly linear time in the total bit size of the output.

The dominant term in the resulting complexity depends on the asymptotic region we are in, and the logarithmic factors are significant. For the above matrix example, for instance, the Cantor–Zassenhaus step dominates the cost. It was recently shown how to save a factor $\log s$ by using the tangent Graeffe method; after that, the evaluation step dominates the cost.
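The following toy version shows the two stages of sparse interpolation in the style of Prony and Ben-Or–Tiwari, over the rationals and with exact arithmetic. The evaluation point 2, the assumption that the number of terms s and a degree bound D are known, and the brute-force exponent search are simplifications for this illustration; practical algorithms work modulo a prime, guess s, and use fast root finding (Cantor–Zassenhaus or tangent Graeffe) as described above.

    # A much simplified sparse interpolation sketch (Prony / Ben-Or--Tiwari style).
    from fractions import Fraction

    def solve(A, b):
        """Solve A*x = b by Gaussian elimination over the rationals."""
        n = len(A)
        M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
        for c in range(n):
            p = next(r for r in range(c, n) if M[r][c] != 0)
            M[c], M[p] = M[p], M[c]
            M[c] = [x / M[c][c] for x in M[c]]
            for r in range(n):
                if r != c and M[r][c] != 0:
                    M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
        return [M[i][n] for i in range(n)]

    def sparse_interpolate(blackbox, s, D):
        """Recover the terms of a polynomial with s terms and degree <= D."""
        v = [Fraction(blackbox(2 ** i)) for i in range(2 * s)]
        # 1. Linear recurrence v[i+s] = sum_j lam[j]*v[i+j]  (Hankel system).
        lam = solve([[v[i + j] for j in range(s)] for i in range(s)],
                    [v[i + s] for i in range(s)])
        # 2. The roots of y^s - lam[s-1]*y^(s-1) - ... - lam[0] are 2^e for the
        #    exponents e in the support; here we simply test all candidates.
        def char_poly(y):
            return y ** s - sum(lam[j] * y ** j for j in range(s))
        exponents = [e for e in range(D + 1) if char_poly(2 ** e) == 0]
        # 3. Coefficients: a transposed Vandermonde system on the first s values.
        coeffs = solve([[Fraction(2 ** e) ** i for e in exponents] for i in range(s)],
                       v[:s])
        return sorted(zip(exponents, coeffs))

    if __name__ == "__main__":
        P = lambda x: 7 * x ** 300 - 2 * x ** 17 + 5    # the hidden sparse polynomial
        print(sparse_interpolate(P, s=3, D=400))        # [(0, 5), (17, -2), (300, 7)]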
For basic arithmetic operations on sparse polynomials, or for low dimensional linear algebra with sparse polynomial coefficients, the evaluation cost can be amortized using multi-point evaluation, which makes the evaluation stage essentially free. In such situations, both Prony's algorithm and the tangent Graeffe algorithm become suboptimal. It then becomes better to evaluate directly at suitable roots of unity using FFTs, which allows for another $\log s$ speed-up, also for the computation of the coefficients. In particular, sparse polynomial multiplication becomes almost as fast as dense multiplication. However, the technical details are subtle.

5 Polynomial system solving

One central problem in symbolic computation is polynomial system solving. Now the mere decision problem whether there exists a solution is a notorious NP-hard problem. However, most algorithms in computer algebra concern the computation of solutions, for various representations of solutions. The number of solutions may be huge, but the cost of finding them is often reasonable with respect to the size of the output.

In this section, we focus on the complete resolution of a generic affine system of n polynomial equations of degrees $d_1, \ldots, d_n$. For definiteness, we assume that the coefficients lie in a field such as $\mathbb{Q}$, $\mathbb{C}$, or a finite field $\mathbb{F}_q$. For simplicity, we also assume that $d_1 = \cdots = d_n = d$. For such a generic system, all solutions are isolated and simple and the number of solutions reaches the Bézout bound $d_1 \cdots d_n = d^n$. There exist several approaches for polynomial system solving:

  - Gröbner bases.
  - Numeric homotopies.
  - Geometric resolution.
  - Triangular systems.

We will restrict our exposition to the first three approaches. But we start with a survey of the related problem of multi-point evaluation, followed by an investigation of the univariate case, which calls for separate techniques.

5.1 Multi-point evaluation

Let $\mathbb{K}$ be a field and $\mathbb{A}$ an algebra over $\mathbb{K}$. The problem of multi-point evaluation takes $P \in \mathbb{K}[x_1, \ldots, x_n]$ and $\alpha_1, \ldots, \alpha_N \in \mathbb{A}^n$ as input and should produce $P(\alpha_1), \ldots, P(\alpha_N)$ as output. This problem is closely related to polynomial system solving: given a tentative solution of such a system, we may check whether it is an actual solution using multi-point evaluation. For complex or p-adic solutions, we may also double the precision of an approximate solution through a combination of Newton's method and multi-point evaluation.

Using the technique of remainder trees, it is classical that the univariate case with $\mathbb{A} = \mathbb{K}$ can be solved in softly optimal time: $\tilde{O}(M(d) N / d)$ if $N \geq d$ and $\tilde{O}(M(N) d / N)$ if $d \geq N$ (a toy version is sketched below). The special case when $N = 1$ and $\mathbb{A}$ is an algebraic extension of $\mathbb{K}$ (typically of degree d) is called the problem of modular composition. This problem can be solved in time $O(d^{\kappa} + M(d) \sqrt{d})$ using the Paterson–Stockmeyer baby-step-giant-step technique. The constant $\kappa \geq 1.5$ is such that a $\sqrt{n} \times \sqrt{n}$ matrix over $\mathbb{K}$ may be multiplied with another $\sqrt{n} \times n$ rectangular matrix in time $O(n^{\kappa})$; one may take $\kappa \approx 1.629$. The same technique can be applied to other multi-point/modular evaluation problems.

A breakthrough was made in 2008 by Kedlaya and Umans, who showed that modular composition and multi-point evaluation with $\mathbb{A} = \mathbb{K}$ are related and that they can both be performed with quasi-optimal complexity. For instance, the bit complexity of modular composition over $\mathbb{F}_q$ is $d^{1+o(1)} (\log q)^{1+o(1)}$. Unfortunately, their algorithms do not seem to be efficient in practice; being non-algebraic, their approach is also not generally applicable in characteristic zero. In the past decade, Grégoire Lecerf and I have investigated several more practical alternatives to Kedlaya–Umans' algorithms.
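Returning to the classical univariate case, here is a minimal sketch of multi-point evaluation with a subproduct (remainder) tree. It uses plain quadratic polynomial arithmetic over the rationals and assumes that the number of points is a power of two, so only the structure of the softly optimal method is illustrated; all names are mine.

    # Multi-point evaluation via a subproduct tree.
    from fractions import Fraction

    def poly_mul(f, g):
        h = [Fraction(0)] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                h[i + j] += a * b
        return h

    def poly_rem(f, g):
        """Remainder of f modulo g (coefficient lists, low degree first)."""
        r = [Fraction(c) for c in f]
        while len(r) >= len(g):
            q = r[-1] / g[-1]
            for i in range(len(g)):
                r[len(r) - len(g) + i] -= q * g[i]
            r.pop()
        return r

    def multipoint_eval(P, points):
        """Evaluate P at all points; len(points) is assumed to be a power of two."""
        # Going up: leaves are x - a_i, inner nodes are products of their children.
        tree = [[[Fraction(-a), Fraction(1)] for a in points]]
        while len(tree[-1]) > 1:
            prev = tree[-1]
            tree.append([poly_mul(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
        # Going down: reduce P modulo each node, so intermediate degrees stay small.
        rems = [poly_rem(P, tree[-1][0])]
        for level in reversed(tree[:-1]):
            rems = [poly_rem(r, level[2 * i + b]) for i, r in enumerate(rems) for b in (0, 1)]
        return [r[0] if r else Fraction(0) for r in rems]

    if __name__ == "__main__":
        P = [Fraction(c) for c in (1, 0, 0, 2, 0, 1)]          # 1 + 2x^3 + x^5
        pts = [Fraction(a) for a in (0, 1, 2, 3, -1, -2, -3, 5)]
        print(multipoint_eval(P, pts))
        print([sum(c * a ** i for i, c in enumerate(P)) for a in pts])  # direct check

With fast multiplication and division in place of poly_mul and poly_rem, each level of the tree costs softly linear time, which gives the softly optimal bounds quoted above.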
Over the integers and rationals, modular composition can be done in softly optimal time for large precisions. Over a finite field $\mathbb{F}_q$ with $q = p^k$ and p prime, it helps when k is composite; if k is smooth, then composition modulo a fixed irreducible polynomial of degree d can even be done in softly optimal time. There has also been progress on the problem of amortized multi-point evaluation, when the evaluation points are fixed. This has culminated in a softly optimal algorithm for any fixed dimension. It would be interesting to know whether general multi-point evaluation can be reduced to the amortized case.

5.2 Univariate polynomials

Consider the problem of computing the complex roots $z_1, \ldots, z_d$ of a square-free polynomial $P \in \mathbb{Q}[x]$. Obviously, this problem is equivalent to the problem of factoring P over $\mathbb{C}$. If we have good approximations of the roots, then we may use Newton's method to double the precision of these approximations. Provided that our bit-precision is sufficiently large, this can be done for all roots together using multi-point evaluation, in softly optimal time $O(M(d) \log d)$ per doubling. Our problem thus contains two main difficulties: how to find good initial approximations and what to do for small bit-precisions?

Based on earlier work by Schönhage, the root finding problem for large precisions was solved in a softly optimal way by Pan. Assuming that $|z_i| \leq 1$ for all i, he essentially showed that P can be factored with b bits of precision in time $\tilde{O}(d^2 + d b)$; in particular, the soft optimality occurs as soon as $b \gtrsim d$. The remaining case of small precisions was solved recently by Moroz. He shows how to isolate the roots of P in softly linear time in terms of d and the logarithm of a suitable condition number. A crucial ingredient of his algorithm is fast multi-point evaluation at small precisions, which he shows can be done in softly linear time, by improving drastically on earlier ideas. A refined analysis shows that his algorithm can be made to run in softly linear time when the required precision remains of the order of $\log d$.

For polynomials $P \in \mathbb{Q}[x]$, one may also ask for an algorithm to factor P over $\mathbb{Q}$ instead of approximating its roots in $\mathbb{C}$. The famous LLL-algorithm provided the first polynomial-time algorithm for this task. A practically more efficient method was later proposed by van Hoeij, and there have been subsequent improvements on the complexity of lattice reduction and its application to factoring.

When P has coefficients in the finite field $\mathbb{F}_q$ with $q = p^k$ and p prime, the best current general purpose factorization algorithm is (Las Vegas) probabilistic and due to Cantor and Zassenhaus; it has expected complexity $O(M_q(d) \log q \log d)$, where $M_q(d)$ stands for the cost of multiplying two polynomials of degree d over $\mathbb{F}_q$ (assuming a plausible number-theoretic conjecture, we have $M_q(d) = O(d \log q)$). Note that the bit-complexity of the Cantor–Zassenhaus algorithm is quadratic in $\log q$. If P splits over $\mathbb{F}_q$, then the tangent Graeffe algorithm is often more efficient; in particular, if $q - 1$ is smooth, then the complexity drops to $O(M_q(d) \log d)$. If q is large, then factorization over $\mathbb{F}_q$ can be reduced efficiently to modular composition. This yields a bound of the form $d^{1.5 + o(1)} + d^{1 + o(1)} \log q$, up to logarithmic factors in q, when using Kedlaya–Umans' algorithm for modular composition. If k is composite, then faster algorithms for modular composition are known. If k is smooth and p is bounded, then factorization can even be done in softly optimal time.

5.3 Gröbner bases

Macaulay matrices are a classical tool to reduce the resolution of generic polynomial systems to linear algebra. For n affine equations of degree d, the complexity of this method is $O\big(\binom{D+n}{n}^{\omega}\big)$, where $D = n(d - 1) + 1 = n d + 1 - n$. For a fixed dimension n, this bound becomes $d^{\omega n + O(1)}$.
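To make the Macaulay matrix tangible, here is a toy construction (entirely my own illustration, with hypothetical helper names) for a small bivariate system: rows are the monomial multiples of the input equations up to degree D, columns are all monomials of degree at most D, and row reduction of this matrix is the linear algebra task behind the bounds just quoted.

    # Building and row-reducing a small Macaulay matrix over the rationals.
    from fractions import Fraction
    from itertools import combinations_with_replacement

    def monomials(nvars, maxdeg):
        """All exponent vectors of total degree <= maxdeg in nvars variables."""
        out = []
        for d in range(maxdeg + 1):
            for combo in combinations_with_replacement(range(nvars), d):
                e = [0] * nvars
                for v in combo:
                    e[v] += 1
                out.append(tuple(e))
        return out

    def macaulay_matrix(polys, nvars, D):
        """polys: dicts {exponent tuple: coefficient}. Returns (rows, columns)."""
        cols = monomials(nvars, D)
        index = {m: i for i, m in enumerate(cols)}
        rows = []
        for f in polys:
            degf = max(sum(e) for e in f)
            for shift in monomials(nvars, D - degf):        # multiply f by x^shift
                row = [Fraction(0)] * len(cols)
                for e, c in f.items():
                    row[index[tuple(a + b for a, b in zip(e, shift))]] += Fraction(c)
                rows.append(row)
        return rows, cols

    def rank(rows):
        """Rank by Gaussian elimination over the rationals."""
        rows = [r[:] for r in rows]
        r = 0
        for c in range(len(rows[0]) if rows else 0):
            piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
            if piv is None:
                continue
            rows[r], rows[piv] = rows[piv], rows[r]
            for i in range(len(rows)):
                if i != r and rows[i][c] != 0:
                    fac = rows[i][c] / rows[r][c]
                    rows[i] = [a - fac * b for a, b in zip(rows[i], rows[r])]
            r += 1
        return r

    if __name__ == "__main__":
        # f1 = x^2 + y^2 - 1 and f2 = x*y - 2 in Q[x, y], in degree D = 4
        f1 = {(2, 0): 1, (0, 2): 1, (0, 0): -1}
        f2 = {(1, 1): 1, (0, 0): -2}
        M, cols = macaulay_matrix([f1, f2], nvars=2, D=4)
        print(len(M), "x", len(cols), "Macaulay matrix of rank", rank(M))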
Many modern Gröbner basis implementations essentially use this Macaulay matrix approach, while avoiding the computation of rows (or columns) that can be predicted to vanish. In particular, Faugère's F5 algorithm can be formulated in this way and runs in time $O\big(\binom{D+n}{n}^{\omega}\big)$, uniformly in n and d. For a generic system of equations of degree d, it follows that we can compute a Gröbner basis with quasi-cubic complexity in terms of the number $\mathcal{B} = d^n$ of solutions.

Practical Gröbner basis computations are most efficient with respect to graded monomial orderings. But solutions of the system are more convenient to read off from Gröbner bases with respect to lexicographical orderings. Changes of orderings for zero-dimensional systems can be performed efficiently using the FGLM algorithm. For a generic system, the complexity of a recent optimization of this method is $O(\mathcal{B}^{\omega})$, where $\mathcal{B}$ is the number of solutions.

But is the computation of a Gröbner basis intrinsically a linear algebra problem? After all, gcds of univariate polynomials can be computed in softly optimal time. It turns out that better algorithms also exist in the bivariate case. One subtlety is that the standard representation of a Gröbner basis for a generic bivariate system with $d_1 = d_2 = d$ already requires $\Omega(d^3)$ space with respect to a graded monomial ordering. It was shown that generic bivariate Gröbner bases can be computed in softly optimal time $\tilde{O}(d^2)$ when using a suitable, more compact representation. Some softly optimal techniques also exist for higher dimensions, such as fast relaxed reduction and heterogeneous Gröbner bases. It is unclear whether these techniques can be used to break the linear algebra barrier.

Remark. Let $I = (f_1, f_2)$ be the ideal generated by two generic bivariate polynomials of degree d. The fast algorithm for computing a Gröbner basis of I also allows the resultant of $f_1$ and $f_2$ to be computed faster. Until recently, complexity bounds for this task were softly cubic, i.e. $\tilde{O}(d^3)$. Over fields of positive characteristic, the complexity drops to $d^{2 + o(1)}$, using fast bivariate multi-point evaluation. For general coefficient fields, Villard recently gave an algebraic algorithm for a slightly different problem that runs in sub-cubic time.

5.4 Numeric homotopies

When $\mathbb{K} \subseteq \mathbb{C}$, another popular method to solve polynomial systems is to use numeric homotopies. Given generic equations $f_1, \ldots, f_n$ of degree d, the idea is to find equations $g_1, \ldots, g_n$ of the same degrees that can be solved easily and then to continuously deform the easy equations into the target equations while following the solutions. For instance, one may take $g_i = z_i^{d_i} - c_i$ and $h_{i,t} = t f_i + (1 - t) g_i$ for random constants $c_1, \ldots, c_n$ and follow the solutions of $h_{1,t} = \cdots = h_{n,t} = 0$ from $t = 0$ until $t = 1$.

The theoretical complexity of such methods has extensively been studied in the BSS model, in which arithmetic operations on real numbers with arbitrary precision can be done with unit cost. This model is suitable for well conditioned problems from classical numerical analysis, when all computations can be done using machine precision. This holds in particular for random polynomial systems. It has been shown that one solution path can then be followed in expected average time $O(d^{3/2} n N)$, where $N = n \binom{n+d}{n}$ is the number of coefficients of $f_1, \ldots, f_n$ in the dense encoding; this bound has subsequently been lowered. A deterministic algorithm that runs in average polynomial time was later given by Lairez. In practice, the observed number of homotopy steps seems to grow fairly slowly with n and d for random systems (a few hundred steps typically suffice), so the above bounds are pessimistic.
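The following bare-bones sketch shows the predictor-corrector mechanics of such a homotopy in the simplest possible setting: a single univariate equation tracked in complex double precision with a fixed step size. Real solvers use adaptive steps, certified tracking, and systems of equations; everything here is a simplifying assumption of mine.

    # Univariate homotopy continuation: follow the roots of g(z) = z^d - 1
    # to those of f along h_t = (1-t)*g + t*f (Euler predictor + Newton corrector).
    import cmath

    def horner(c, z):
        """Evaluate the polynomial with coefficients c (low to high) at z."""
        v = 0j
        for a in reversed(c):
            v = v * z + a
        return v

    def track_all_roots(f, steps=200, newton_iters=3):
        d = len(f) - 1
        g = [-1.0] + [0.0] * (d - 1) + [1.0]                         # z^d - 1
        df = [k * c for k, c in enumerate(f)][1:]
        dg = [k * c for k, c in enumerate(g)][1:]
        roots = [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]  # start solutions
        for i in range(steps):
            t0, t1 = i / steps, (i + 1) / steps
            for j, z in enumerate(roots):
                # Euler predictor: dz/dt = -(f(z) - g(z)) / h_t'(z) at t = t0
                hp = (1 - t0) * horner(dg, z) + t0 * horner(df, z)
                z = z - (t1 - t0) * (horner(f, z) - horner(g, z)) / hp
                # Newton corrector at t = t1
                for _ in range(newton_iters):
                    h = (1 - t1) * horner(g, z) + t1 * horner(f, z)
                    hp = (1 - t1) * horner(dg, z) + t1 * horner(df, z)
                    z = z - h / hp
                roots[j] = z
        return roots

    if __name__ == "__main__":
        f = [-2.0, 0.0, 0.0, 1.0]                                     # z^3 - 2
        for z in sorted(track_all_roots(f), key=lambda w: w.real):
            print(f"{z.real:+.6f}{z.imag:+.6f}i   residual {abs(horner(f, z)):.2e}")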
However, we are interested in the computation of all solutions. When assuming that the average number of steps remains bounded, this gives rise to a complexity $O(N d^n)$. If we also take into account the required bit precision b, then the bit complexity becomes $O(N d^n M(b))$. Fortunately, the bulk of the computation can usually be done with a fixed precision, and we may directly use Newton's method to repeatedly double the precision at the end. If we were able to develop a softly optimal algorithm for numeric multivariate multi-point evaluation, then the complexity would be $\tilde{O}((N + b) d^n)$.

Numeric homotopies perform particularly well for random systems, which behave essentially like generic ones. However, the existing software is less robust than algebraic solvers for degenerate systems, especially in the presence of solutions with high multiplicities. It remains a practical challenge to develop more robust homotopy solvers for general systems and a theoretical challenge to understand the complexity of this problem.

5.5 Geometric resolution

The technique of geometric resolution was developed and further perfected by several authors. It works over arbitrary fields $\mathbb{K}$ and for polynomials that are given by a DAG of size L. For simplicity, we assume that our system is generic. After a random linear change of variables, the idea is to successively solve the systems

    f_1(z_1, 0, ..., 0) = 0
    f_1(z_1, z_2, 0, ..., 0) = f_2(z_1, z_2, 0, ..., 0) = 0
    ...
    f_1(z_1, ..., z_n) = ... = f_n(z_1, ..., z_n) = 0

in the form $z_i = a_i(\alpha)/q'(\alpha)$, where $\alpha$ is a formal parameter that satisfies $q(\alpha) = 0$ for a suitable univariate polynomial q. This representation of the solutions is called a geometric resolution; it is also a rational univariate representation with a special type of denominator.

Let us focus on the last step, which dominates the cost. The solutions $z_i = a_i(\alpha)/q'(\alpha)$ of the system

    f_1(z_1, ..., z_{n-1}, 0) = ... = f_{n-1}(z_1, ..., z_{n-1}, 0) = 0

are first lifted into solutions $z_i = a_i(\alpha, t)/q'(\alpha, t)$ of the system

    f_1(z_1, ..., z_{n-1}, t) = ... = f_{n-1}(z_1, ..., z_{n-1}, t) = 0.

We next intersect with the hypersurface $f_n(z_1, ..., z_{n-1}, t) = 0$. It turns out that it suffices to work with power series in t modulo a suitable finite power of t. Then the intersection step gives rise to the computation of a large resultant, which can be done in softly quadratic time in terms of the Bézout number $\mathcal{B} = d^n$. Altogether, the algorithm requires $\tilde{O}(L \mathcal{B}^2)$ expected operations in $\mathbb{K}$.

Now $L = O(N)$ for a dense system of equations of degree d. For large n and fixed d, we observe that $\tilde{O}(L \mathcal{B}^2) = \mathcal{B}^{2 + o(1)}$. However, for fixed n and large d, the bound $\tilde{O}(L \mathcal{B}^2)$ is only softly cubic in $\mathcal{B}$. Using variants of Kedlaya–Umans' algorithms for fast multi-point evaluation, the cost of the polynomial evaluations can be amortized, which leads to a quasi-quadratic bound $\mathcal{B}^{2 + o(1)}$ over finite fields. When working over the rational numbers, the bit precision generically grows with $\mathcal{B}$ as well and the output is of size $\tilde{O}(\mathcal{B}^2)$; in this case, the complexity bound actually becomes quasi-optimal.

5.6 Conclusion

Can a complexity-driven approach help us to solve polynomial systems faster? In the above sections, we have seen that such an approach naturally leads to other questions:

  - What is the complexity of multivariate multi-point evaluation?
  - How can we take advantage of fast polynomial arithmetic?
  - How does bit complexity rhyme with numerical conditioning and clusters of solutions?

These questions are interesting for their own sake and they have indeed triggered at least some of the recent progress.

Acknowledgments. I wish to thank Amir Hashemi and Fatima Abu Salem for their careful reading and suggestions.