On sparse interpolation of rational functions and gcds

This paper is part of a project that has received funding from the French "Agence de l'innovation de défense". This document has been written using GNU TeXmacs [8].

CNRS, École polytechnique, Institut Polytechnique de Paris,
Laboratoire d'informatique de l'École polytechnique (LIX, UMR 7161),
1, rue Honoré d'Estienne d'Orves, Bâtiment Alan Turing, CS35003,
91120 Palaiseau, France

Abstract. In this note, we present a variant of a probabilistic algorithm by Cuyt and Lee for the sparse interpolation of multivariate rational functions. We also present an analogous method for the computation of sparse gcds.

1. Introduction

Let $f = A/B \in \mathbb{K}(x_1, \dots, x_n)$ be a rational function over an effective field $\mathbb{K}$, presented as a blackbox that can be evaluated at points in $\mathbb{K}^n$. The problem of sparse interpolation is to recover $f$ in its usual representation, as a quotient of the form
$$f = \frac{A}{B}, \qquad A = \sum_{1 \le i \le s_A} a_i\, x^{\alpha_i}, \qquad B = \sum_{1 \le i \le s_B} b_i\, x^{\beta_i},$$
where $a_i \in \mathbb{K} \setminus \{0\}$ and $\alpha_i \in \mathbb{N}^n$ for $i = 1, \dots, s_A$, where $b_i \in \mathbb{K} \setminus \{0\}$ and $\beta_i \in \mathbb{N}^n$ for $i = 1, \dots, s_B$, where $b_1 = 1$, and where $x^{\gamma} := x_1^{\gamma_1} \cdots x_n^{\gamma_n}$ for any $\gamma = (\gamma_1, \dots, \gamma_n) \in \mathbb{N}^n$. Here "sparse" means that we use a sparse representation for multivariate polynomials: only the set of the non-zero terms is stored, and each term is represented by a pair made of a coefficient and an exponent vector. When using a sparse representation, the natural complexity measures are the number of non-zero terms and the maximum bit size of the exponents. On the contrary, a dense representation of a polynomial in $\mathbb{K}[x_1, \dots, x_n]$, say of total degree $d$, is the vector of all its coefficients up to degree $d$ for a prescribed monomial ordering. The sparse interpolation of a blackbox polynomial has been widely studied in computer algebra, and very efficient algorithms are known.
In this paper, we will use this polynomial case as a basic building block, with an abstract specification. We refer to [1, 4, 9, 10, 13, 17] for state of the art algorithms for polynomial sparse interpolation and further historical references. Our main problem here is to find an efficient reduction of the general sparse interpolation problem for rational functions to the special case of polynomials. We focus on the case when the total degrees $d_A$ and $d_B$ are "modest" with respect to the total number $s := s_A + s_B$ of terms. This is typically the case for applications in computer algebra. We also recall that algorithms in this area are usually probabilistic, of Monte Carlo type; our new algorithms are no exception. In a similar vein, we assume that the cardinality of $\mathbb{K}$ is "sufficiently large", so that random points in $\mathbb{K}^n$ are "generic" with high probability.

One efficient reduction from sparse rational function interpolation to sparse polynomial interpolation was proposed by Cuyt and Lee [2]. Our new method is a variant of their algorithm; through the use of projective coordinates, we are able to prove a better complexity bound: see section 3. Computations with rational functions are intimately related to gcd computations. In section 4, we show that our approach also applies to gcd computations: given a blackbox function for the computation of two sparse polynomials $P$ and $Q$, we present a new complexity bound for the sparse interpolation of $\gcd(P, Q)$. We also present an alternative heuristic algorithm for the case when $P$ and $Q$ are given explicitly, in sparse representation.

For simplicity, all time complexities are measured in terms of the required number of operations in $\mathbb{K}$. Since our algorithms are probabilistic, we assume that a random element in $\mathbb{K}$ can be produced in unit time. Our complexity bounds are valid in several complexity models. For definiteness, one may assume a probabilistic RAM model that supports operations in $\mathbb{K}$.
2. Preliminaries

2.1. On the complexity of sparse polynomial interpolation

The algorithms in this paper all rely on a given base algorithm for the interpolation of sparse polynomials. For a polynomial $g$ in $n$ variables of total degree $\le d$ with $\le s$ terms, given by a blackbox function, we assume that this algorithm requires $\mathsf{N}(n, d, s)$ evaluations of $g$, as well as $\mathsf{C}(n, d, s)$ additional operations for the actual interpolation from the latter values of $g$. We recall that this algorithm is allowed to be probabilistic, of Monte Carlo type.

As a particular example of interest, consider the case when $\mathbb{K} = \mathbb{F}_q$, and let $\mathsf{M}(s)$ be the cost to multiply two polynomials of degree $\le s$ in $\mathbb{K}[x]$. In a suitable asymptotic region where $n$ and $d$ do not grow too fast, Prony's traditional probabilistic geometric progression technique yields
$$\mathsf{N}(n, d, s) = 2 s, \qquad \mathsf{C}(n, d, s) = O(\mathsf{M}(s) \log s \log q).$$
For FFT-primes $q$, one may also apply the tangent Graeffe method [10], which yields
$$\mathsf{C}(n, d, s) = O(\mathsf{M}(s) \log s).$$
We refer to [10] for a survey of recent complexity bounds for sparse polynomial interpolation. In what follows, we always assume that $\mathsf{N}(n, d, s) / s$ and $\mathsf{C}(n, d, s) / s$ are non-decreasing in $s$.

2.2. Genericity and random dilatations

Consider a polynomial or rational blackbox function $f$. We assume that a blackbox function over $\mathbb{K}$ only involves field operations in $\mathbb{K}$ (which includes equality testing and branching, if needed), and that the number of these operations is uniformly bounded. Efficient algorithms for the sparse interpolation of $f$ typically evaluate $f$ at a specific sequence of points, such as a geometric progression. For the algorithms in this paper, it is convenient to systematically make the assumption that we only evaluate at points $(a_1, \dots, a_n) \in \mathbb{K}^n$ with $a_i \neq 0$ for $i = 1, \dots, n$. This assumption holds in particular for points in geometric progressions.

One problem with the evaluation of a blackbox function that involves divisions is that the evaluation may fail whenever a division by zero occurs. An evaluation point for which this happens is colloquially called a bad point. Such bad points must be discarded, and the same holds for sequences of evaluation points that contain at least one bad point.
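The random dilatation trick discussed next in the text can be made concrete as follows. This is a toy sketch over a prime field $\mathbb{Z}/p\mathbb{Z}$; the blackbox polynomial, the prime and all names are our own illustrative choices, not part of the algorithms of this paper.

```python
import random

p = 10007  # a prime, so that Z/pZ is a field

def blackbox(x1, x2):
    # hidden sparse polynomial f = 3*x1^2*x2 + 5*x2^4 (our toy example)
    return (3 * x1 * x1 * x2 + 5 * pow(x2, 4, p)) % p

# Random dilatation: interpolate g(x1, x2) = f(l1*x1, l2*x2) instead of f,
# so that the evaluation points of the base method avoid bad points w.h.p.
l1, l2 = random.randrange(1, p), random.randrange(1, p)
g = lambda x1, x2: blackbox(l1 * x1 % p, l2 * x2 % p)

# Suppose the sparse interpolation of g returned its terms as {exponent: coeff};
# here we write them down directly, since the point is the recovery step.
g_terms = {(2, 1): 3 * pow(l1, 2, p) * l2 % p,
           (0, 4): 5 * pow(l2, 4, p) % p}

# Since l1, l2 are non-zero, each coefficient of f is recovered by dividing
# out l1^e1 * l2^e2 (inverses via Fermat's little theorem).
f_terms = {(e1, e2): c * pow(pow(l1, e1, p) * pow(l2, e2, p) % p, p - 2, p) % p
           for (e1, e2), c in g_terms.items()}
```

Since the dilatation is invertible coordinate by coordinate, the support of $g$ equals that of $f$, which is what makes the backward step a mere coefficient scaling.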
Of course, it would in principle suffice to choose another sequence of evaluation points. However, in general, sufficiently generic sequences are not readily at our disposal with high probability. This problem may be overcome using randomization, as follows. Since our blackbox function only involves a bounded number of field operations in $\mathbb{K}$, the set of bad evaluation points is a constructible subset of $\mathbb{K}^n$. We assume that this subset is not Zariski dense. Consequently, there exists a non-zero polynomial $Z \in \mathbb{K}[x_1, \dots, x_n]$ such that $Z(a_1, \dots, a_n) = 0$ whenever $(a_1, \dots, a_n)$ is a bad point. Given $(a_1, \dots, a_n) \in (\mathbb{K} \setminus \{0\})^n$, the probability that a random point $(\lambda_1, \dots, \lambda_n) \in (\mathbb{K} \setminus \{0\})^n$ satisfies
$$Z(\lambda_1 a_1, \dots, \lambda_n a_n) \neq 0$$
is at least
$$1 - \frac{\deg Z}{|\mathbb{K}| - 1},$$
by the Schwartz–Zippel lemma [5, Lemma 6.44], where $|\mathbb{K}|$ denotes the cardinality of $\mathbb{K}$.

Now assume that we are given a method for the sparse interpolation of any blackbox function with the same support as $f$. The evaluation points for such a method form a fixed finite subset $E$ of $(\mathbb{K} \setminus \{0\})^n$. If $|\mathbb{K}|$ is sufficiently large, then it follows with high probability that $Z(\lambda_1 a_1, \dots, \lambda_n a_n) \neq 0$ for all $(a_1, \dots, a_n) \in E$, when taking $(\lambda_1, \dots, \lambda_n) \in (\mathbb{K} \setminus \{0\})^n$ at random. We may thus interpolate
$$g(x_1, \dots, x_n) := f(\lambda_1 x_1, \dots, \lambda_n x_n)$$
instead of $f$, with high probability. Since $\lambda_1, \dots, \lambda_n$ are non-zero, we may next recover the sparse interpolation of $f(x_1, \dots, x_n) = g(\lambda_1^{-1} x_1, \dots, \lambda_n^{-1} x_n)$ itself.

3. Rational functions

This section concerns the sparse interpolation of a rational function $f \in \mathbb{K}(x_1, \dots, x_n)$ given as a blackbox.

3.1. Related work

The natural idea to interpolate a univariate rational function $f \in \mathbb{K}(x)$ given as a blackbox is to evaluate it at many points and then use Cauchy interpolation [5, Corollary 5.18] in combination with the fast Euclidean algorithm [5, Chapter 11]; this idea appears in early works on the topic [14]. If the numerator and denominator of $f$ have respective degrees $d_A$ and $d_B$, then the total cost decomposes into $d_A + d_B + 2$ evaluations of $f$ and
$$O(\mathsf{M}(d_A + d_B) \log (d_A + d_B)) = \tilde{O}(d_A + d_B)$$
operations in $\mathbb{K}$ for the Cauchy interpolation. Here $\tilde{O}(\cdot)$ is the usual shorthand that discards factors of the form $\log^{O(1)}(\cdot)$, as in [5].
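The univariate Cauchy interpolation just mentioned can be sketched as follows: Lagrange-interpolate the values, then run the extended Euclidean algorithm until the remainder has degree at most $d_A$. This is a minimal quadratic-time illustration over $\mathbb{Z}/p\mathbb{Z}$, not the softly-linear algorithm cited in the text; all helper names and the example function are ours.

```python
p = 10007  # prime; polynomials are coefficient lists over Z/pZ, low degree first
inv = lambda a: pow(a, p - 2, p)

def pmul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def psub(f, g):
    n = max(len(f), len(g))
    h = [((f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)) % p
         for i in range(n)]
    while len(h) > 1 and h[-1] == 0:
        h.pop()
    return h

def pdivmod(f, g):
    q, r = [0] * max(1, len(f) - len(g) + 1), f[:]
    while len(r) >= len(g) and r != [0]:
        c, d = r[-1] * inv(g[-1]) % p, len(r) - len(g)
        q[d] = c
        r = psub(r, pmul([0] * d + [c], g))
    return q, r

def lagrange(pts, vals):
    f = [0]
    for i, (xi, yi) in enumerate(zip(pts, vals)):
        num, den = [1], 1
        for j, xj in enumerate(pts):
            if i != j:
                num = pmul(num, [(-xj) % p, 1])
                den = den * (xi - xj) % p
        f = psub(f, pmul(num, [(-(yi * inv(den))) % p]))  # f += c * num
    return f

def cauchy_interpolate(pts, vals, dA):
    # Lagrange polynomial F and modulus M = prod (x - x_i); then the extended
    # Euclidean algorithm, tracking the cofactor of F, until deg(remainder) <= dA.
    F, M = lagrange(pts, vals), [1]
    for xi in pts:
        M = pmul(M, [(-xi) % p, 1])
    r0, r1, t0, t1 = M, F, [0], [1]
    while len(r1) - 1 > dA:
        q, r = pdivmod(r0, r1)
        r0, r1, t0, t1 = r1, r, t1, psub(t0, pmul(q, t1))
    return r1, t1  # numerator and denominator, up to a common scalar
```

With $d_A + d_B + 2$ values of a genuine rational function whose numerator and denominator are coprime and pole-free on the points, the returned pair is a scalar multiple of the true numerator and denominator.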
Interpolating rational functions seems more difficult than interpolating polynomials. In high degree, it is still a challenging problem to design algorithms that are efficient in terms of the sparse size $s_A + s_B$ of the function. An algorithm with exponential time complexity is known from [6]. In the multivariate case, the interpolation problem can be handled as a dense one with respect to one of the variables and as a sparse one with respect to the other variables [14]. Although no general polynomial complexity bound is known in terms of the sparse size, this approach is extremely useful in practice, as soon as one of the variables has small degree. Roughly speaking, one proceeds as follows: for a fixed point $(a_1, \dots, a_{n-1})$, we reconstruct the univariate function $f(a_1, \dots, a_{n-1}, x_n)$ with the dense representation in $x_n$. Then we regard the numerator and the denominator as polynomials in $\mathbb{K}[x_1, \dots, x_{n-1}][x_n]$, and it remains to interpolate their sparse coefficients using sufficiently many evaluation points $(a_1, \dots, a_{n-1})$. The difficulty with the latter interpolation is that the values of $A(a_1, \dots, a_{n-1}, x_n)$ and $B(a_1, \dots, a_{n-1}, x_n)$ are only determined up to a scalar in $\mathbb{K}$. In order to compute this scalar, a normalization assumption is necessary. For instance, we may require $B$ to be monic in $x_n$, which can be achieved through a random linear change of coordinates. Another kind of normalization has been proposed by Cuyt and Lee [2]: they introduce a new auxiliary variable to decompose $A$ and $B$ into their homogeneous components. Their approach is sketched below. Let us finally mention [12], which addresses the special case $\mathbb{K} = \mathbb{Q}$ using a specific strategy.

3.2. The algorithm by Cuyt and Lee

An efficient reduction of sparse interpolation from rational functions to polynomials was proposed by Cuyt and Lee in [2]. Their approach starts by introducing a homogenizing variable $u$ and writing
$$f(x_1 u, \dots, x_n u) = \frac{A_0 + A_1(x_1, \dots, x_n)\, u + \cdots + A_{d_A}(x_1, \dots, x_n)\, u^{d_A}}{B_0 + B_1(x_1, \dots, x_n)\, u + \cdots + B_{d_B}(x_1, \dots, x_n)\, u^{d_B}}.$$
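As a concrete illustration of this decomposition (the example function is ours, not from Cuyt and Lee), take $f = (x_1 + x_2^2) / (1 + x_1 x_2)$. Then
$$f(x_1 u, x_2 u) = \frac{x_1 u + x_2^2 u^2}{1 + x_1 x_2 u^2},$$
so that $A_1 = x_1$, $A_2 = x_2^2$, $B_0 = 1$, $B_2 = x_1 x_2$, and all other homogeneous components vanish. For a fixed point $(a_1, a_2)$, the left-hand side is a univariate rational function of $u$ that can be recovered by Cauchy interpolation; dividing its numerator and denominator by the constant coefficient of the denominator then yields well-defined values for the components.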
With high probability, they next reduce to the case when $B_0$ is non-zero, by choosing a random point $(\beta_1, \dots, \beta_n) \in \mathbb{K}^n$ and performing a shift:
$$\check{f}(x_1, \dots, x_n, u) := f(x_1 u + \beta_1, \dots, x_n u + \beta_n) = \frac{\check{A}_0 + \check{A}_1(x_1, \dots, x_n)\, u + \cdots + \check{A}_{d_A}(x_1, \dots, x_n)\, u^{d_A}}{\check{B}_0 + \check{B}_1(x_1, \dots, x_n)\, u + \cdots + \check{B}_{d_B}(x_1, \dots, x_n)\, u^{d_B}}.$$
For a fixed point $(a_1, \dots, a_n) \in \mathbb{K}^n$, we may consider $\check{f}(a_1, \dots, a_n, u)$ as a univariate rational function in $u$, which can generically be interpolated from $d_A + d_B + 2$ evaluations for different values of $u$. The first main idea from [2] is to normalize the resulting rational function in such a way that the constant coefficient of the denominator is equal to one:
$$\check{f}(x_1, \dots, x_n, u) = \frac{\frac{\check{A}_0}{\check{B}_0} + \frac{\check{A}_1(x_1, \dots, x_n)}{\check{B}_0}\, u + \cdots + \frac{\check{A}_{d_A}(x_1, \dots, x_n)}{\check{B}_0}\, u^{d_A}}{1 + \frac{\check{B}_1(x_1, \dots, x_n)}{\check{B}_0}\, u + \cdots + \frac{\check{B}_{d_B}(x_1, \dots, x_n)}{\check{B}_0}\, u^{d_B}}.$$
This yields blackbox functions that take $(a_1, \dots, a_n)$ as input and produce
$$\frac{\check{A}_0}{\check{B}_0}, \frac{\check{A}_1(a_1, \dots, a_n)}{\check{B}_0}, \dots, \frac{\check{A}_{d_A}(a_1, \dots, a_n)}{\check{B}_0}, \frac{\check{B}_1(a_1, \dots, a_n)}{\check{B}_0}, \dots, \frac{\check{B}_{d_B}(a_1, \dots, a_n)}{\check{B}_0}$$
as output. We may now use sparse polynomial interpolation for each of these blackbox functions and determine the sparse expressions for the above polynomials. We finally obtain $f(x_1, \dots, x_n) = \check{f}(x_1 - \beta_1, \dots, x_n - \beta_n, 1)$ using a simple shift.

One complication here is that shifts
$$A(x_1, \dots, x_n) \longmapsto A(x_1 + \beta_1, \dots, x_n + \beta_n)$$
destroy the sparsity. The second main observation by Cuyt and Lee is that this drawback only occurs for the non-leading terms: the coefficient $\check{A}_{d_A}$ (resp. $\check{B}_{d_B}$) coincides with $A_{d_A}$ (resp. $B_{d_B}$). Using this observation recursively in a clever way, it is possible to obtain the non-leading coefficients as well, using at most
$$(d_A + d_B + 2) \cdot \max(\mathsf{N}(n, d_A, s_A), \mathsf{N}(n, d_B, s_B))$$
blackbox evaluations [2, Section 2.2]. However, even though sparse interpolations of shifted polynomials can be avoided, the resulting algorithm still requires evaluations of shifted polynomials in their interpolated form: see the remarks at the end of [2, Section 2.2]. In fact, the focus of [2] is on minimizing the number of blackbox evaluations; the overall complexity of the algorithm is not analyzed in detail. Instead of using the recursive method from [2, Section 2.2] for determining the non-leading coefficients, we propose to avoid the mere existence of such terms altogether.
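To preview the construction of the next subsection concretely: homogenizing $f$ with one extra projective coordinate makes the numerator and denominator homogeneous, at the price of one power of $x_0$ and $n$ divisions per evaluation. A toy sketch over $\mathbb{Z}/p\mathbb{Z}$; the function $f$, its degrees and all names are our own illustrative choices.

```python
p = 10007
inv = lambda a: pow(a, p - 2, p)

def f(x1, x2):
    # blackbox for f = A/B with A = x1^2 + x2 (dA = 2) and B = 1 + x1*x2^2
    # (dB = 3); this concrete function is our toy example, not from the paper
    return (x1 * x1 + x2) % p * inv((1 + x1 * x2 * x2) % p) % p

dA, dB = 2, 3

def ftilde(x0, x1, x2):
    # x0^(dA - dB) * f(x1/x0, x2/x0); negative powers via Fermat (x^(p-1) = 1)
    i0 = inv(x0)
    return pow(x0, (dA - dB) % (p - 1), p) * f(x1 * i0 % p, x2 * i0 % p) % p
```

By construction, ftilde equals the ratio of the homogenizations $\tilde A = x_1^2 + x_0 x_2$ and $\tilde B = x_0^3 + x_1 x_2^2$, and it is homogeneous of degree $d_A - d_B$.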
3.3. A variant based on projective coordinates

At the start of our algorithm, we assume that the total degrees $d_A$ and $d_B$ of the numerator and the denominator of $f$ are known. If we are only given a rough upper bound $d$ on these degrees, then we may reduce to this case as follows: take a point $(a_1, \dots, a_n) \in \mathbb{K}^n$ at random, and interpolate $f(a_1 u, \dots, a_n u)$ as a univariate rational function in $u$, from $2 d + 2$ evaluations at different values of $u$. With high probability, $d_A$ and $d_B$ coincide with the degrees of the numerator and denominator of $f(a_1 u, \dots, a_n u)$.

Our key observation is the following: if $A$ and $B$ are homogeneous polynomials, then $\check{A}_{d_A} = A$ and $\check{B}_{d_B} = B$, so Cuyt and Lee's method mostly reduces to two sparse polynomial interpolations. Our method is therefore to force the polynomials $A$ and $B$ to become homogeneous by introducing a new projective coordinate $x_0$. More precisely, we introduce the rational function
$$\tilde{f}(x_0, x_1, \dots, x_n) := x_0^{d_A - d_B}\, f(x_1 / x_0, \dots, x_n / x_0),$$
and we write
$$\tilde{f}(x_0, \dots, x_n) = \frac{\tilde{A}(x_0, \dots, x_n)}{\tilde{B}(x_0, \dots, x_n)},$$
where
$$\tilde{A}(x_0, \dots, x_n) := x_0^{d_A}\, A(x_1 / x_0, \dots, x_n / x_0), \qquad \tilde{B}(x_0, \dots, x_n) := x_0^{d_B}\, B(x_1 / x_0, \dots, x_n / x_0).$$
The numerator $\tilde{A}(x_0, \dots, x_n)$ and denominator $\tilde{B}(x_0, \dots, x_n)$ of $\tilde{f}$ are both homogeneous and coprime. One evaluation of $\tilde{f}$ requires one evaluation of $f$ plus $O(n + \log |d_A - d_B|)$ operations in $\mathbb{K}$. For some random shift $(\beta_0, \dots, \beta_n) \in \mathbb{K}^{n+1}$, we next consider
$$\hat{A}(x_0, \dots, x_n, u) := \tilde{A}(x_0 u + \beta_0, \dots, x_n u + \beta_n) = \hat{A}_0 + \hat{A}_1(x_0, \dots, x_n)\, u + \cdots + \hat{A}_{d_A}(x_0, \dots, x_n)\, u^{d_A},$$
$$\hat{B}(x_0, \dots, x_n, u) := \tilde{B}(x_0 u + \beta_0, \dots, x_n u + \beta_n) = \hat{B}_0 + \hat{B}_1(x_0, \dots, x_n)\, u + \cdots + \hat{B}_{d_B}(x_0, \dots, x_n)\, u^{d_B},$$
$$\hat{f}(x_0, \dots, x_n, u) := \tilde{f}(x_0 u + \beta_0, \dots, x_n u + \beta_n).$$
One evaluation of $\hat{f}$ requires one evaluation of $\tilde{f}$ plus $O(n)$ operations in $\mathbb{K}$. Before addressing the interpolation algorithm for $f$, let us show that $\hat{A} / \hat{B}$ is the canonical representative of $\hat{f}$.

Lemma 1. If $\tilde{B}(\beta_0, \dots, \beta_n) \neq 0$, then $\hat{A}$ and $\hat{B}$ are coprime.

Proof. We consider the following weighted degree:
$$\operatorname{wdeg}\bigl(x_0^{\alpha_0} \cdots x_n^{\alpha_n} u^e\bigr) := \alpha_0 + \alpha_1 + \cdots + \alpha_n - e.$$
For this weight, $\hat{A}$ and $\hat{B}$ are quasi-homogeneous, whence so are their respective irreducible factors and therefore their gcd, written $\hat{G}$. Consequently, there exists a non-negative integer $k$ such that
$$\hat{G}(x_0, \dots, x_n, u) = u^k\, \hat{G}(x_0 u, \dots, x_n u, 1).$$
By construction, $\hat{A}(x_0, \dots, x_n, 1)$ and $\hat{B}(x_0, \dots, x_n, 1)$ are coprime, whence $\hat{G}(x_0, \dots, x_n, 1)$ is a non-zero constant $c$ and $\hat{G}(x_0, \dots, x_n, u) = c\, u^k$. Since $\hat{B}_0 = \tilde{B}(\beta_0, \dots, \beta_n) \neq 0$, we conclude that $k = 0$. □

Theorem 1. Let $f \in \mathbb{K}(x_1, \dots, x_n)$ be a rational blackbox function whose evaluation requires $L$ operations in $\mathbb{K}$. Let $s := s_A + s_B$, where $s_A$ and $s_B$ are bounds for the number of terms of $A$ and $B$. Let $d := d_A + d_B$, where $d_A$ and $d_B$ are the total degrees of $A$ and $B$. Using a probabilistic algorithm of Monte Carlo type, the sparse interpolation of $f$ takes at most
$$\bigl((L + O(n + \log d))\, (d + 2) + O(\mathsf{M}(d) \log d)\bigr)\, \mathsf{N}(n + 1, d, s) + \mathsf{C}(n + 1, d, s) + O(n\, s \log d)$$
operations in $\mathbb{K}$, provided that the cardinality of $\mathbb{K}$ is sufficiently large.

Proof. First of all, a point $(\beta_0, \dots, \beta_n)$ is taken at random, so that $\hat{B}_0 = \tilde{B}(\beta_0, \dots, \beta_n) \neq 0$ holds with high probability. Let us describe a blackbox for evaluating $\hat{A}_{d_A} / \hat{B}_0$ and $\hat{B}_{d_B} / \hat{B}_0$ at a point $(a_0, \dots, a_n)$:

- We perform $d_A + d_B + 2$ evaluations of $\hat{f}(a_0, \dots, a_n, u)$ for different values of $u$, which give rise to at most
$$\bigl(L + O(n + \log |d_A - d_B|)\bigr)\, (d_A + d_B + 2) = \bigl(L + O(n + \log d)\bigr)\, (d + 2)$$
operations in $\mathbb{K}$.
- If the latter evaluations do not fail, then we perform the univariate Cauchy interpolation of $\hat{f}(a_0, \dots, a_n, u)$, which takes $O(\mathsf{M}(d) \log d) = \tilde{O}(d)$ operations in $\mathbb{K}$. If this Cauchy interpolation fails, then the evaluation of $\hat{A}_{d_A} / \hat{B}_0$ and $\hat{B}_{d_B} / \hat{B}_0$ aborts.

The failure of the Cauchy interpolation of $\hat{f}(a_0, \dots, a_n, u)$ depends on the actual procedure being used, but we may design it to succeed on a Zariski dense subset of points $(a_0, \dots, a_n)$ in $\mathbb{K}^{n+1}$. For instance, with the aforementioned method of [5, Corollary 5.18], the reconstruction succeeds whenever the degrees of the reconstructed numerator and denominator are $d_A$ and $d_B$, respectively. This holds when $R(a_0, \dots, a_n) \neq 0$, where
$$R(x_0, \dots, x_n) := \operatorname{Res}_u\bigl(\hat{A}(x_0, \dots, x_n, u), \hat{B}(x_0, \dots, x_n, u)\bigr).$$
Since $\hat{B}$ is primitive as an element of $\mathbb{K}[x_0, \dots, x_n][u]$, the gcd of $\hat{A}$ and $\hat{B}$ in $\mathbb{K}[x_0, \dots, x_n, u]$ can be obtained as the primitive representative in $\mathbb{K}[x_0, \dots, x_n][u]$ of the gcd of $\hat{A}$ and $\hat{B}$ in $\mathbb{K}(x_0, \dots, x_n)[u]$; see [15, Chapter IV, Theorem 2.3], for instance. Using Lemma 1, we deduce that $R$ is not identically zero. Consequently, the evaluation of $\hat{A}_{d_A} / \hat{B}_0$ and $\hat{B}_{d_B} / \hat{B}_0$ at a point $(a_0, \dots, a_n)$ does not fail on a Zariski dense subset of $\mathbb{K}^{n+1}$.
In order to ensure that the sparse polynomial interpolation succeeds with high probability, we appeal to the discussion from section 2.2 for the evaluations of $\hat{A}_{d_A} / \hat{B}_0$ and $\hat{B}_{d_B} / \hat{B}_0$. More precisely, by taking $(\lambda_0, \dots, \lambda_n) \in (\mathbb{K} \setminus \{0\})^{n+1}$ at random, we can interpolate $\hat{A}_{d_A}(\lambda_0 x_0, \dots, \lambda_n x_n) / \hat{B}_0$ and $\hat{B}_{d_B}(\lambda_0 x_0, \dots, \lambda_n x_n) / \hat{B}_0$ with high probability. This random dilatation further increases the cost of each blackbox evaluation by $O(n)$ operations in $\mathbb{K}$. The required number of evaluation points $(a_0, \dots, a_n)$ is
$$\max\bigl(\mathsf{N}(n + 1, d_A, s_A), \mathsf{N}(n + 1, d_B, s_B)\bigr) \le \mathsf{N}(n + 1, d, s).$$
Once all evaluations are done, we need
$$\mathsf{C}(n + 1, d_A, s_A) + \mathsf{C}(n + 1, d_B, s_B) \le \mathsf{C}(n + 1, d, s)$$
further operations to reconstruct the sparse representations of $\hat{A}_{d_A}(\lambda_0 x_0, \dots, \lambda_n x_n) / \hat{B}_0$ and $\hat{B}_{d_B}(\lambda_0 x_0, \dots, \lambda_n x_n) / \hat{B}_0$. The backward dilatation, to recover $\hat{A}_{d_A}(x_0, \dots, x_n) / \hat{B}_0$ and $\hat{B}_{d_B}(x_0, \dots, x_n) / \hat{B}_0$, takes
$$O\bigl(n\, (s_A \log d + s_B \log d)\bigr) = O(n\, s \log d)$$
further operations in $\mathbb{K}$. To conclude the proof, we observe that $\hat{A}_{d_A}$ and $\hat{B}_{d_B}$ coincide with the leading homogeneous components of $\tilde{A}$ and $\tilde{B}$. But since $\tilde{A}$ and $\tilde{B}$ are homogeneous, this means that $\hat{A}_{d_A} / \hat{B}_0$ and $\hat{B}_{d_B} / \hat{B}_0$ actually coincide with $\tilde{A} / \hat{B}_0$ and $\tilde{B} / \hat{B}_0$; we thus recover $\tilde{f} = \tilde{A} / \tilde{B}$ and $f(x_1, \dots, x_n) = \tilde{f}(1, x_1, \dots, x_n)$. □

Remark 1. One nice aspect of Cuyt and Lee's algorithm is that the number of blackbox evaluations can be of the order of
$$(d + 2)\, \mathsf{N}(n, d, \lceil s / d \rceil)$$
if the terms of $A$ and $B$ are more or less equally distributed over their homogeneous components. With our method, we always pay the maximal possible overhead $d + 2$ with respect to the sparse interpolation of polynomials.

4. Sparse gcds

Let $P, Q \in \mathbb{K}[x_1, \dots, x_n]$ be polynomials that are given through a common blackbox. In this section, we show how to adapt the technique from the previous section to the computation of $\gcd(P, Q)$.

4.1. Related work

A classical approach for multivariate gcds relies on evaluation/interpolation for all but one of the variables, say $x_1$, and thus on the computation of many univariate gcds; see for instance [5, Chapter 6]. This turns out to be similar to rational function interpolation: the univariate case is handled using dense representations, and a suitable normalization is necessary to recombine the specialized gcds. In general, and without generic coordinates, the gcd does not admit a natural normalization. This issue has been studied in [3].
Basically, one piece of their solution is to consider $P$ and $Q$ as polynomials in $\mathbb{K}(x_2, \dots, x_n)[x_1]$ and to force the gcd to be monic in $x_1$. This leads to interpolating rational functions instead of polynomials, and the technique of the previous section may be used. But it is even more efficient to adapt our main idea for rational functions to gcds.

The sparse gcd problem was first investigated by Zippel [18], who proposed a variable by variable approach. Then, subresultant polynomials turned out to be useful to avoid rational function interpolations [3]. For historical references, we refer the reader to [14]. Software implementations and a recent state of the art can be found in [11]. Let us mention that probabilistic sparse gcd algorithms may be turned into Las Vegas type algorithms by interpolating the full Bézout relation, at the price of an increased computational cost; see for instance [5, Chapter 6]. In this paper, we content ourselves with Monte Carlo algorithms and we focus on gcd computations only.

4.2. A variant based on projective coordinates

Let $P, Q \in \mathbb{K}[x_1, \dots, x_n]$ be polynomials that are given through a common blackbox, and assume that $G := \gcd(P, Q)$ contains $t$ terms. We let $d_P := \deg P$, $d_Q := \deg Q$, $d_G := \deg G$. Using the technique from the previous section, we first reduce to the case when $P$ and $Q$ are homogeneous (without increasing their total degrees), so we let
$$\tilde{P}(x_0, \dots, x_n) := x_0^{d_P}\, P(x_1 / x_0, \dots, x_n / x_0),$$
$$\tilde{Q}(x_0, \dots, x_n) := x_0^{d_Q}\, Q(x_1 / x_0, \dots, x_n / x_0),$$
$$\tilde{G}(x_0, \dots, x_n) := x_0^{d_G}\, G(x_1 / x_0, \dots, x_n / x_0).$$
For $(\beta_0, \dots, \beta_n) \in \mathbb{K}^{n+1}$, we let
$$\hat{P}(x_0, \dots, x_n, u) := \tilde{P}(x_0 u + \beta_0, \dots, x_n u + \beta_n),$$
$$\hat{Q}(x_0, \dots, x_n, u) := \tilde{Q}(x_0 u + \beta_0, \dots, x_n u + \beta_n),$$
$$\hat{G}(x_0, \dots, x_n, u) := \tilde{G}(x_0 u + \beta_0, \dots, x_n u + \beta_n) = \hat{G}_0 + \hat{G}_1(x_0, \dots, x_n)\, u + \cdots + \hat{G}_{d_G}(x_0, \dots, x_n)\, u^{d_G}.$$

Example 1. With $n = 2$, $P = x_1^2$, $Q = x_1 x_2$, $\beta_0 = \beta_1 = \beta_2 = 0$, we have $\hat{P} = x_1^2 u^2$ and $\hat{Q} = x_1 x_2 u^2$. We note that $G = x_1$, and then that $\hat{G} = x_1 u$. However, $\gcd(\hat{P}, \hat{Q}) = x_1 u^2$.

The latter example shows that we need to take care of the relationship between $\hat{G}$ and $\gcd(\hat{P}, \hat{Q})$. This is the purpose of the following lemma, which extends Lemma 1.

Lemma 2. If $\tilde{P}(\beta_0, \dots, \beta_n) \neq 0$ or $\tilde{Q}(\beta_0, \dots, \beta_n) \neq 0$, then $\hat{G} = \gcd(\hat{P}, \hat{Q})$.

Proof. Let $\Gamma := \gcd(\hat{P}, \hat{Q})$. In the proof of Lemma 1, we have shown that
$$\Gamma(x_0, \dots, x_n, u) = u^k\, \Gamma(x_0 u, \dots, x_n u, 1),$$
for some $k \in \mathbb{N}$.
By construction, we have $\hat{G}(x_0, \dots, x_n, 1) = \gcd\bigl(\hat{P}(x_0, \dots, x_n, 1), \hat{Q}(x_0, \dots, x_n, 1)\bigr)$, so $\Gamma(x_0, \dots, x_n, 1)$ divides $\hat{G}(x_0, \dots, x_n, 1)$. Conversely, $\hat{G}(x_0, \dots, x_n, 1)$ divides $\Gamma(x_0, \dots, x_n, 1)$ (indeed, $\hat{G}$ divides both $\hat{P}$ and $\hat{Q}$, whence $\hat{G}$ divides $\Gamma$), so there exists a non-zero constant $c \in \mathbb{K}$ with
$$\Gamma(x_0, \dots, x_n, 1) = c\, \hat{G}(x_0, \dots, x_n, 1).$$
It follows that
$$\Gamma(x_0, \dots, x_n, u) = u^k\, \Gamma(x_0 u, \dots, x_n u, 1) = c\, u^k\, \hat{G}(x_0 u, \dots, x_n u, 1) = c\, u^k\, \hat{G}(x_0, \dots, x_n, u).$$
Since $\tilde{P}(\beta_0, \dots, \beta_n) \neq 0$ or $\tilde{Q}(\beta_0, \dots, \beta_n) \neq 0$, we finally have $\hat{G}_0 = \tilde{G}(\beta_0, \dots, \beta_n) \neq 0$, so that $k = 0$. □

From now on, we assume that $\tilde{P}(\beta_0, \dots, \beta_n) \neq 0$ or $\tilde{Q}(\beta_0, \dots, \beta_n) \neq 0$, so that $\hat{P}$ or $\hat{Q}$ is primitive as an element of $\mathbb{K}[x_0, \dots, x_n][u]$. By Lemma 2 and [15, Chapter IV, Theorem 2.3], their gcd $\hat{G}$ in $\mathbb{K}[x_0, \dots, x_n, u]$ can be obtained as the primitive representative in $\mathbb{K}[x_0, \dots, x_n][u]$ of the gcd of $\hat{P}$ and $\hat{Q}$ in $\mathbb{K}(x_0, \dots, x_n)[u]$. We further observe that $\hat{G}(x_0, \dots, x_n, 0) = \hat{G}_0 \in \mathbb{K} \setminus \{0\}$, whence
$$\frac{\hat{G}(x_0, \dots, x_n, u)}{\hat{G}_0} = \frac{\hat{G}(x_0, \dots, x_n, u)}{\hat{G}(x_0, \dots, x_n, 0)}. \tag{1}$$
This allows us to obtain
$$\frac{\hat{G}(x_0, \dots, x_n, u)}{\hat{G}_0} = 1 + \frac{\hat{G}_1(x_0, \dots, x_n)}{\hat{G}_0}\, u + \cdots + \frac{\hat{G}_{d_G}(x_0, \dots, x_n)}{\hat{G}_0}\, u^{d_G}$$
as the gcd of $\hat{P}$ and $\hat{Q}$ in $\mathbb{K}(x_0, \dots, x_n)[u]$, normalized so that its constant coefficient is one, and to recover $\tilde{G}$ via
$$\frac{\tilde{G}(x_0, \dots, x_n)}{\hat{G}_0} = \frac{\hat{G}_{d_G}(x_0, \dots, x_n)}{\hat{G}_0}.$$
This approach yields the following complexity bound for the sparse interpolation of $G(x_1, \dots, x_n) / \hat{G}_0$.

Theorem 2. Consider a blackbox function that computes the two polynomials $P$ and $Q$ in $\mathbb{K}[x_1, \dots, x_n]$ using $L$ operations in $\mathbb{K}$. Let $d_P$, $d_Q$, and $d_G \le \min(d_P, d_Q)$ be the respective total degrees of $P$, $Q$, and $G = \gcd(P, Q)$. Let $D := \max(d_P, d_Q)$. Using a probabilistic algorithm of Monte Carlo type, we may interpolate $G / \hat{G}_0$ using
$$\bigl((L + O(n + \log D))\, (D + 1) + O(\mathsf{M}(D) \log D)\bigr)\, \mathsf{N}(n + 1, d_G, t) + \mathsf{C}(n + 1, d_G, t) + O(n\, t \log D)$$
operations in $\mathbb{K}$, provided that the cardinality of $\mathbb{K}$ is sufficiently large.

Proof. We first take $(\beta_0, \dots, \beta_n)$ at random, so that $\tilde{P}(\beta_0, \dots, \beta_n) \neq 0$ or $\tilde{Q}(\beta_0, \dots, \beta_n) \neq 0$ with high probability. In order to evaluate $\hat{G}(\cdot, u) / \hat{G}_0$ at a point $(a_0, \dots, a_n)$, we proceed as follows:

- We evaluate $\hat{P}(a_0, \dots, a_n, u)$ and $\hat{Q}(a_0, \dots, a_n, u)$ for $D + 1$ different values of $u$. The evaluation of $x_0^{d_P}$, $x_0^{d_Q}$, $x_1 / x_0, \dots, x_n / x_0$ takes $n + O(\log d_P + \log d_Q)$ operations in $\mathbb{K}$. The evaluation of $x_0 u + \beta_0, \dots, x_n u + \beta_n$ involves $2 (n + 1)$ more operations. Therefore, one evaluation of $\hat{P}$ and $\hat{Q}$ amounts to
$$L + 3 n + 2 + O(\log d_P + \log d_Q) = L + 3 n + O(\log D)$$
operations in $\mathbb{K}$.
- We interpolate $\hat{P}(a_0, \dots, a_n, u)$ and $\hat{Q}(a_0, \dots, a_n, u)$, and then compute
$$g := \gcd\bigl(\hat{P}(a_0, \dots, a_n, u), \hat{Q}(a_0, \dots, a_n, u)\bigr).$$
This requires $O(\mathsf{M}(D) \log D) = \tilde{O}(D)$ operations in $\mathbb{K}$.
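The two itemized steps above (specialize at a point, obtain the two univariate polynomials in $u$, compute their gcd, and normalize the constant coefficient to one) can be illustrated as follows. The polynomials, the shift, the evaluation point and all helper names are our own toy choices, and the shifted specializations are expanded symbolically rather than interpolated from values.

```python
p = 10007
inv = lambda a: pow(a, p - 2, p)

def pmul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def padd(f, g):
    n = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % p
            for i in range(n)]

def prem(f, g):  # remainder of univariate division over Z/pZ
    f = f[:]
    while len(f) >= len(g) and f != [0]:
        c, d = f[-1] * inv(g[-1]) % p, len(f) - len(g)
        for i in range(len(g)):
            f[d + i] = (f[d + i] - c * g[i]) % p
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return f

def pgcd(f, g):  # monic gcd via Euclid's algorithm
    while g != [0]:
        f, g = g, prem(f, g)
    return [a * inv(f[-1]) % p for a in f]

# Homogeneous toy example (ours): P = (x1+x2)(x1^2+x2^2), Q = (x1+x2)(x1+2*x2),
# so that G = x1 + x2.  Shift by beta = (1, 2), specialize at (a1, a2) = (3, 4):
# x1 -> a1*u + b1 and x2 -> a2*u + b2 as polynomials in u.
b1, b2, a1, a2 = 1, 2, 3, 4
X1, X2 = [b1, a1], [b2, a2]
Phat = pmul(padd(X1, X2), padd(pmul(X1, X1), pmul(X2, X2)))
Qhat = pmul(padd(X1, X2), padd(X1, pmul([2], X2)))
g = pgcd(Phat, Qhat)
g = [c * inv(g[0]) % p for c in g]  # normalize the constant coefficient to 1
```

The normalized result coincides with the specialization of $\hat G / \hat G_0$, here $(3 + 7u)/3$, which is the value the outer sparse interpolation consumes.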
By (1), the normalization $g / g(0)$ coincides with $\hat{G}(a_0, \dots, a_n, u) / \hat{G}_0$ whenever $\deg_u \hat{P}(a_0, \dots, a_n, u) = d_P$, $\deg_u \hat{Q}(a_0, \dots, a_n, u) = d_Q$, and the subresultant coefficient of degree $d_G$ of $\hat{P}(a_0, \dots, a_n, u)$ and $\hat{Q}(a_0, \dots, a_n, u)$ is non-zero: see [5, Chapter 6] or [16], for instance. In this way, we have built a blackbox that evaluates $\hat{G}(\cdot, u) / \hat{G}_0$ on a Zariski dense subset of $\mathbb{K}^{n+1}$.

In order to ensure that the sparse interpolation of $\tilde{G} / \hat{G}_0$ succeeds with high probability, we appeal to the discussion from section 2.2. Precisely, we take $(\lambda_0, \dots, \lambda_n) \in (\mathbb{K} \setminus \{0\})^{n+1}$ at random and we interpolate $\tilde{G}(\lambda_0 x_0, \dots, \lambda_n x_n) / \hat{G}_0$. This random dilatation further increases the cost of each blackbox evaluation by $O(n)$ operations in $\mathbb{K}$. We need $\mathsf{N}(n + 1, d_G, t)$ evaluation points, together with $\mathsf{C}(n + 1, d_G, t)$ further operations in $\mathbb{K}$, for the sparse interpolation of $\tilde{G} / \hat{G}_0$. Finally, we recover the requested gcd of $P$ and $Q$ from $G(x_1, \dots, x_n) / \hat{G}_0 = \tilde{G}(1, x_1, \dots, x_n) / \hat{G}_0$, using $O(n\, t \log D)$ extra operations in $\mathbb{K}$. □

4.3. Another heuristic approach

Let us now focus on the case when $P$ and $Q$ are presented as sparse polynomials instead of blackbox functions (more generally, one may consider the model from [7, Section 5.3]). The number of terms of $P$ (resp. of $Q$) is written $s_P$ (resp. $s_Q$). The above method has the disadvantage that we need to evaluate $P$ and $Q$ at many shifted points that generally do not lie in a geometric progression. Such evaluations are typically more expensive, when using a general algorithm for multi-point evaluation, so it is better to avoid shifting altogether in this case.

Now consider the particular case when
$$G(x_1 u, \dots, x_n u) = G_v(x_1, \dots, x_n)\, u^v + \cdots + G_{d_G}(x_1, \dots, x_n)\, u^{d_G}$$
and the trailing component $G_v$ contains a single term $c_v\, x^{\gamma_v}$. Then we may use an alternative normalization where we divide by $G_v$ instead of $\hat{G}_0$, and multiply by $(x_1 \cdots x_n)^{d_G}$ to keep things polynomial. For a random point $(a_1, \dots, a_n) \in (\mathbb{K} \setminus \{0\})^n$, we may check whether we are in this case (with high probability) by testing whether
$$G_v(a_1, \dots, a_n)^2 = G_v(1, \dots, 1)\, G_v(a_1^2, \dots, a_n^2).$$
Once $H := (x_1 \cdots x_n)^{d_G}\, G(x_1 u, \dots, x_n u) / (c_v\, x^{\gamma_v} u^v)$ has been interpolated, we recover $G$ as
$$G = x^{\min(\gamma_P, \gamma_Q) - \gamma_H}\, H,$$
where $x^{\gamma_H}$ stands for the gcd of all monomials occurring in $H$, and similarly for $x^{\gamma_P}$ and $x^{\gamma_Q}$, the minimum being taken componentwise.
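One way to realize such a probabilistic single-term check is via multiplicativity: a single term $c\, x^{\gamma}$ satisfies $F(a)\, F(b) = F(1, \dots, 1)\, F(a_1 b_1, \dots, a_n b_n)$ for all points, whereas a sum of several terms fails this identity at random points with high probability. The concrete two-point variant below is our own sketch of a check of this kind, not the paper's exact procedure.

```python
import random

p = 10007

def single_term_probably(F, n, trials=4):
    # Monte Carlo test: returns True if F behaves multiplicatively at random
    # points, i.e. is (with high probability) a single term c*x^gamma.
    ones = (1,) * n
    for _ in range(trials):
        a = [random.randrange(1, p) for _ in range(n)]
        b = [random.randrange(1, p) for _ in range(n)]
        ab = [x * y % p for x, y in zip(a, b)]
        if F(*a) * F(*b) % p != F(*ones) * F(*ab) % p:
            return False
    return True

G1 = lambda x1, x2: 5 * x1 * x1 * x2 % p    # single term: 5*x1^2*x2
G2 = lambda x1, x2: (x1**3 + x2**3) % p     # two terms
```

Each trial only needs four evaluations of the component, so the test is cheap compared to the interpolation itself.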
With these modifications, the complexity bound of Theorem 2 becomes
$$\bigl((L + O(n + \log D))\, (D + 1) + O(\mathsf{M}(D) \log D)\bigr)\, \mathsf{N}(n, d_G, t) + \mathsf{C}(n, d_G, t) + O(n\, t \log D).$$
The same approach can be used if the leading component $G_{d_G}$ contains a single term $c_{d_G}\, x^{\gamma_{d_G}}$. In the case when $G_v$ contains several terms, we consider
$$G^{[w]}(x_1, \dots, x_n, u) := G(x_1 u^{w_1}, \dots, x_n u^{w_n}) = G^{[w]}_{v_w}(x_1, \dots, x_n)\, u^{v_w} + \cdots + G^{[w]}_{d_w}(x_1, \dots, x_n)\, u^{d_w}$$
for a suitable vector $w = (w_1, \dots, w_n) \in (\mathbb{N} \setminus \{0\})^n$ of weights. Whenever $G^{[w]}_{v_w}(x_1, \dots, x_n)$ contains a single term, we may use the above method to compute $G^{[w]}$ and then $G$. However, the degree $d_w$ of $G^{[w]}$ in $u$ is potentially $|w|$ times larger than the degree of $G$, where $|w| := \max(w_1, \dots, w_n)$. This leads to the question how small we can take $|w|$ while ensuring that $G^{[w]}_{v_w}$ contains a single term with probability at least $1/2$. We conjecture that this is the case when taking $w$ at random with $|w| \le 2 d_G$. In practice, the idea is to try a sequence of random weights $w$ for which $|w|$ follows a slow geometric progression.

Let us provide some partial evidence for our conjecture. In dimension $n = 2$, consider the Newton polygon of a polynomial of degree $d$ made of the maximal number of slopes. For each integer $k > 0$, let $N_k$ be the number of slopes $p / q$ with $p, q \in \{1, \dots, k\}$, $\gcd(p, q) = 1$, and $\max(p, q) = k$. We have
$$N_1 + 2 N_2 + \cdots + d N_d \le 2 d.$$
Under this constraint, the sum $N_1 + \cdots + N_d$ is maximized by considering Newton polygons that first exhaust all slopes for which $\max(p, q)$ is minimal. Now $N_k \le 2 k - 1$ for all $k$. Taking $m$ minimal with $1 + 2 \cdot 3 + \cdots + m\, (2 m - 1) \ge 2 d$, it follows that
$$N_1 + \cdots + N_d \le 1 + 3 + \cdots + (2 m - 1) = m^2.$$
Now $1 + 2 \cdot 3 + \cdots + m\, (2 m - 1) \ge \tfrac{2}{3} m^3$, so $m \le (3 d)^{1/3} + 1$ and
$$N_1 + \cdots + N_d \le \bigl((3 d)^{1/3} + 1\bigr)^2 = O(d^{2/3}).$$
On the other hand, for any $k > 0$, the number of rational numbers $p / q$ with $p, q \in \{1, \dots, k\}$ coprime is at least
$$k^2 - \lfloor k / 2 \rfloor^2 - \lfloor k / 3 \rfloor^2 - \cdots \ge \Bigl(2 - \frac{\pi^2}{6}\Bigr) k^2 \ge \frac{k^2}{3}.$$
Now let $k$ be the smallest integer with $k^2 / 3 \ge 2\, m^2$, so that $k = O(d^{1/3})$. For a random weight $(w_1, w_2) \in \{1, \dots, k\}^2$ with $\gcd(w_1, w_2) = 1$, we then conclude that $G^{[w]}_{v_w}$ has a unique term with probability at least $1/2$.

As a second piece of evidence, let us show that there always exists a weight vector $w \in \{0, \dots, d\}^n$ for which the extremal coefficient contains a unique term. Let $\sigma$ be an element of the support $\operatorname{supp} G := \{\gamma : G_{\gamma} \neq 0\}$ of $G$ for which $\sigma_1^2 + \cdots + \sigma_n^2$ is maximal.
Then $\operatorname{supp} G$ is included in the ball with center $0$ and radius $|\sigma| := (\sigma_1^2 + \cdots + \sigma_n^2)^{1/2}$, so the tangent space to this ball at $\sigma$ intersects $\operatorname{supp} G$ only at $\sigma$. Taking $w := \sigma \in \mathbb{N}^n$, the normal vector to this tangent space, it follows that $G^{[w]}_{d_w} = c\, x_1^{\sigma_1} \cdots x_n^{\sigma_n}$ for some $c \in \mathbb{K} \setminus \{0\}$.

Bibliography

[1] A. Arnold, M. Giesbrecht, and D. S. Roche. Faster sparse multivariate polynomial interpolation of straight-line programs. J. Symbolic Comput., 75:4–24, 2016.
[2] A. Cuyt and W.-s. Lee. Sparse interpolation of multivariate rational functions. Theoret. Comput. Sci., 412:1445–1456, 2011.
[3] J. de Kleine, M. Monagan, and A. Wittkopf. Algorithms for the non-monic case of the sparse modular GCD algorithm. In ISSAC '05, pages 124–131. ACM, New York, NY, USA, 2005.
[4] S. Garg and É. Schost. Interpolation of polynomials given by straight-line programs. Theoret. Comput. Sci., 410(27-29):2659–2662, 2009.
[5] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, New York, 3rd edition, 2013.
[6] D. Grigoriev, M. Karpinski, and M. F. Singer. Computational complexity of sparse rational interpolation. SIAM J. Comput., 23(1):1–11, 1994.
[7] J. van der Hoeven. Probably faster multiplication of sparse polynomials. Technical report, HAL, 2020.
[8] J. van der Hoeven. The Jolly Writer. Your Guide to GNU TeXmacs. Scypress, 2020.
[9] J. van der Hoeven and G. Lecerf. Sparse polynomial interpolation in practice. ACM Commun. Comput. Algebra, 48(3/4):187–191, 2015.
[10] J. van der Hoeven and G. Lecerf. Sparse polynomial interpolation. Exploring fast heuristic algorithms over finite fields. Technical report, HAL, 2019.
[11] J. Hu and M. Monagan. A fast parallel sparse polynomial GCD algorithm. J. Symbolic Comput., 2020.
[12] Q.-L. Huang and X.-S. Gao. Sparse rational function interpolation with finitely many values for the coefficients. In J. Blömer, I. S. Kotsireas, T. Kutsia, and D. E. Simos, editors, Mathematical Aspects of Computer and Information Sciences (MACIS 2017), volume 10693 of Theoretical Computer Science and General Issues, pages 227–242. Springer International Publishing, Cham, 2017.
[13] M. Javadi and M. Monagan. Parallel sparse polynomial interpolation over finite fields. In M. Moreno Maza and J.-L. Roch, editors, PASCO '10, pages 160–168. ACM, New York, NY, USA, 2010.
[14] E. Kaltofen and B. M. Trager. Computing with polynomials given by black boxes for their evaluations: greatest common divisors, factorization, separation of numerators and denominators.
J. Symbolic Comput., 9(3):301–320, 1990.
[15] S. Lang. Algebra, volume 211 of Graduate Texts in Mathematics. Springer-Verlag, New York, 3rd edition, 2002.
[16] G. Lecerf. On the complexity of the Lickteig–Roy subresultant algorithm. J. Symbolic Comput., 92:243–268, 2019.
[17] D. S. Roche. What can (and can't) we do with sparse polynomials? In C. Arreche, editor, ISSAC '18, pages 25–30. ACM, New York, NY, USA, 2018.
[18] R. Zippel. Probabilistic algorithms for sparse polynomials. In Edward W. Ng, editor, Symbolic and Algebraic Computation EUROSAM '79, volume 72 of Lecture Notes in Computer Science, pages 216–226. Springer Berlin Heidelberg, 1979.