This paper has partially been supported by the Fields Institute of Mathematical Sciences at Toronto.

Laboratoire de Mathématiques (bâtiment 425), CNRS, Université Paris-Sud, 91405 Orsay Cedex, France

Abstract. In this paper, we describe an algorithm for the "uniformization" of a multivariate power series. Let $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$ be the field of "grid-based power series" over a sufficiently large non-archimedean "monomial group" (or value group) $\hat{\mathfrak{M}}$, such as $\hat{\mathfrak{M}}=\{t_1^{\alpha_1}\cdots t_k^{\alpha_k}:\alpha_1,\ldots,\alpha_k\in\mathbb{R}\}$ with the lexicographical ordering on $t_1,\ldots,t_k$. We interpret power series $f\in\mathbb{K}[[x_1,\ldots,x_n]]$ as functions $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n\to\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\preceq}$. On certain "regions" $\mathcal{R}$ of the space $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n$, it may happen that the valuation of $f$ can be read off from the valuations of the $x_i$. In that case, $f$ is said to be "uniform" on $\mathcal{R}$. We will describe an algorithm for cutting $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n$ into a finite number of regions, on each of which $f$ is uniform for a suitable change of coordinates which preserves the elimination ordering on $x_1,\ldots,x_n$. The algorithm can probably be seen as an effective counterpart of local uniformization in the sense of Zariski, even though this connection remains to be established in detail.

Let $\mathbb{K}$ be a field and $\hat{\mathbb{K}}$ an extension field of $\mathbb{K}$. Given a polynomial $P\in\mathbb{K}[x_1,\ldots,x_n]$, a natural question is to study the behaviour of $P$ as a function $\hat{\mathbb{K}}^n\to\hat{\mathbb{K}}$. In order to capture all relevant properties of $P$, it is convenient to assume that $\hat{\mathbb{K}}$ is algebraically closed, or at least contains the algebraic closure $\bar{\mathbb{K}}$ of $\mathbb{K}$. Using quantifier elimination, we may always cut $\hat{\mathbb{K}}^n$ into a finite number of "regions" (which are constructible subsets of $\hat{\mathbb{K}}^n$ in this case), on each of which $P$ has a "uniform behaviour". For instance, outside a certain Zariski closed singular locus, the solutions to $P(x_1,\ldots,x_n)=0$ are given by a finite number of ramified non-singular algebraic functions $x_n=\varphi(x_1,\ldots,x_{n-1})$. A similar challenge can be stated for many other theories.
For instance, if $\mathbb{K}$ is a real field, then we may also want to study the real geometric properties of the function $\hat{\mathbb{K}}^n\to\hat{\mathbb{K}}$, such as the regions where $P$ is positive. In this case, $\hat{\mathbb{K}}$ is rather taken to be the real closure of $\mathbb{K}$, and cylindrical decomposition is a typical technique for studying the behaviour of the function $P$.

In this paper, we will consider a power series $f\in\mathbb{K}[[x_1,\ldots,x_n]]$ over a field of characteristic zero, instead of a polynomial. We will see that $f$ can again be regarded as a function $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n\to\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\preceq}$ on a suitable space of grid-based series over a sufficiently large, non-archimedean monomial group $\hat{\mathfrak{M}}$. At a first approximation, elements of the field $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$ might be taken to be series in $t_1,\ldots,t_k$ with real exponents, and with a lexicographical ordering on $t_1,\ldots,t_k$. Series in $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec}$ and $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\preceq}$ correspond to infinitesimal resp. bounded series in $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$. We refer to the corresponding section below for more precise definitions.

A suitable language for the study of power series functions $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n\to\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\preceq}$ is the usual language of fields, extended with an asymptotic neglection relation $\preceq$. In this language, $\varphi\prec\psi$ should be interpreted as $\varphi=o(\psi)$, $\varphi\preceq\psi$ should be interpreted as $\varphi=O(\psi)$, and $\varphi\asymp\psi$ corresponds to the case when $\varphi\preceq\psi\preceq\varphi$. If $\varphi$ and $\psi$ lie in a valued field, such as $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$, then we also have $\varphi\prec\psi\Leftrightarrow v(\varphi)\prec v(\psi)$, $\varphi\preceq\psi\Leftrightarrow v(\varphi)\preceq v(\psi)$ and $\varphi\asymp\psi\Leftrightarrow v(\varphi)=v(\psi)$. Even though $\mathbb{K}[[x_1,\ldots,x_n]]$ is not a valued field, it does come with a partial neglection relation $\preceq$. Furthermore, a relation such as $x_2\prec x_1$ naturally corresponds to a subset $\{(\xi_1,\xi_2)\in(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^2:\xi_2\prec\xi_1\}$ of $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^2$.

From the asymptotic point of view, the simplest behaviour of a series $f\in\mathbb{K}[[x_1,\ldots,x_n]]$ on a subset $\mathcal{R}$ of $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n$ is when $v(f(\xi_1,\ldots,\xi_n))$ only depends on $v(\xi_1),\ldots,v(\xi_n)$ for points $(\xi_1,\ldots,\xi_n)\in\mathcal{R}$. If $\mathcal{R}=(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n$, this is the case if and only if $f$ is a uniform series, in the sense that its support $\operatorname{supp}f\subseteq x_1^{\mathbb{N}}\cdots x_n^{\mathbb{N}}$ admits a unique $\preceq$-maximal element $x^{d}=x_1^{d_1}\cdots x_n^{d_n}$, called the dominant monomial of $f$. In other words, we may write $f=x^{d}g$, where $g$ is a unit in $\mathbb{K}[[x_1,\ldots,x_n]]$.
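The relations $\prec$, $\preceq$, $\asymp$ can be made concrete for the model monomial group of the abstract. The following Python sketch is our own illustration (the encoding of a monomial $t_1^{\alpha_1}\cdots t_k^{\alpha_k}$ by its exponent tuple, and the convention that a lexicographically larger exponent tuple means an asymptotically smaller monomial, are hypothetical choices, not part of the formal development):

```python
# Monomials t1^a1 * ... * tk^ak with infinitesimal t1, ..., tk, encoded by
# their exponent tuples; the ordering on exponents is lexicographic, and a
# larger exponent tuple corresponds to a *smaller* monomial.

def precedes(m1, m2):      # m1 "precedes" m2, written m1 < m2 (strict)
    return m1 > m2         # Python tuple comparison is lexicographic

def dominated(m1, m2):     # m1 is dominated by m2 (non-strict)
    return m1 >= m2

def asymptotic(m1, m2):    # m1 and m2 have the same asymptotic magnitude
    return m1 == m2

x1 = (1, 0)   # valuation of x1
x2 = (1, 1)   # same t1-power, one extra t2-power, hence x2 precedes x1
assert precedes(x2, x1)
assert dominated(x1, x1) and asymptotic(x1, x1)
```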
The series $f=x_1+x_2\in\mathbb{K}[[x_1,x_2]]$ also satisfies $v(f(\xi_1,\xi_2))=v(\xi_1)$ on the region where $x_2\prec x_1$. Again, it is possible to regard $f$ as a uniform series, but in a suitable ring $\mathbb{K}[[x_2/x_1,x_1]]$ of conical series, which corresponds precisely to the coordinate ring for the region on which $x_2\prec x_1$.

Given an arbitrary series $f\in\mathbb{K}[[x_1,\ldots,x_n]]$, our main objective is to present an algorithm which decomposes the space $(\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]^{\prec})^n$ into a finite number of well-described regions $\mathcal{R}$, endowed with suitable local conical coordinates, such that $f$ becomes a uniform conical series on each of these regions, with respect to the local coordinates. Moreover, the necessary changes of coordinates will respect the elimination order on $x_1,\ldots,x_n$. More precisely, if $\tilde{x}_1,\ldots,\tilde{x}_n$ are the local coordinates on any of the regions $\mathcal{R}$, then $\tilde{x}_i$ only depends on $x_1,\ldots,x_i$ for each $i$. In particular, at the end of the algorithm, it will be possible to read off the solutions $x_n=\varphi(x_1,\ldots,x_{n-1})$ to the equation $f(x_1,\ldots,x_n)=0$ from the answer.

Resolution of singularities, which has recently been made effective, is one means to solve our problem. However, the resolution process has to be carefully adapted in order to preserve the elimination ordering, which is nontrivial. Moreover, resolution of singularities is really an overkill for our problem: for many practical applications, it is not necessary to glue the final regions together using birational maps. What is worse, insisting on global desingularization can be expected to artificially blow up the complexity, since the size of a description of the final non-singular variety is usually huge. In fact, our objective is closer to local uniformization in the sense of Zariski. In the future, we hope to provide a dictionary which will prove both approaches to be equivalent up to preservation of the elimination order. As we will see, one major advantage of our approach is that the general case is essentially reduced to the Newton-Puiseux method in dimension two. An earlier version of our approach was first described in [10].
Apart from some minor changes (notational, terminological, and the fact that we will not assume $\mathfrak{M}$ to be ordered), several improvements were made. First of all, we give a more geometric description of the uniform Newton degree, thereby simplifying our previous treatment of parallel descent of the Newton degree. We also corrected some errors concerning the use of so-called pseudo-coefficients. Finally, we adapted the method to series with coefficients in a $\mathbb{K}$-vector space $V$ instead of $\mathbb{K}$, where $V$ is typically itself a ring of power series in part of the variables. Although this last change is not mandatory, it simplifies the overall treatment and also prepares for the uniformization of local vector fields.

Let us briefly outline the contents of the paper. We first introduce conical varieties and their function rings, whose elements are conical series. In order to deal with regions where $x_i\asymp x_1^{\lambda_1}\cdots x_{i-1}^{\lambda_{i-1}}$ for certain $\lambda_1,\ldots,\lambda_{i-1}$, we will rewrite
$$x_i=x_1^{\lambda_1}\cdots x_{i-1}^{\lambda_{i-1}}\,(c+\varepsilon)$$
for a new parameter $c$ over $\mathbb{K}$ and $\varepsilon\prec1$. For this reason, the coefficients of our conical series will always live in an algebraic extension $\mathbb{K}_\Lambda$ of $\mathbb{K}$ with a finite number of parameters. It is also possible, but less convenient, to directly work with serial coordinates $x_i$ which are either infinitesimal ($x_i\prec1$) or bounded ($x_i\preceq1$). We next introduce refinements, which correspond to injective conical morphisms between conical varieties of the same dimension. We then recall some general techniques for machine computations with conical series. First of all, we present a technique for computations with series with respect to changing coordinate systems. We next recall the formalism of non-deterministic algorithms, which is suitable for modeling the situation in which a region has to be cut into several pieces. Finally, we briefly recall the notion of a local community. This concept aims at modeling certain subclasses of conical series which are suitable for computations on a concrete computer.
Of course, general power series are not even representable on a machine, since they may involve infinitely many coefficients. All effective computations assume that $\mathbb{K}$ is an effective field. This means that there are algorithms for performing the field operations and the equality test in $\mathbb{K}$. We finally turn to algorithms for the uniformization of a conical series. We first recall the classical Newton polygon method in two dimensions, but for series whose coefficients lie in a more general field $\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$ of generalized power series. We next consider uniform versions of this method, using evaluations of all but one variable. In particular, we will define an important invariant, the uniform Newton degree, which will strictly decrease every two refinements. We finally present our uniformization algorithm. Since a uniform series remains uniform under refinements, the algorithm can also be used for the simultaneous uniformization of several series.

Uniformization is a key ingredient for many computations with multivariate power series. Clearly, we need it in order to describe the asymptotic behaviour of such series. We typically also need it for the inversion of a power series, for expressing the zeros of $f(x_1,\ldots,x_n)$ as a function $x_n=\varphi(x_1,\ldots,x_{n-1})$ in the remaining variables, and for many other problems which can be stated in the first order theory of fields with a neglection relation $\preceq$. For instance, given two series $f,g\in\mathbb{K}[[x_1,x_2,x_3]]$, it allows us to determine the region of all $(\xi_1,\xi_2,\xi_3)$ for which $f\prec g$.

In this section, we start by recalling some classical terminology and basic facts concerning grid-based series. Let $\mathbb{K}$ be a field of characteristic zero and $\mathfrak{M}$ a monomial monoid, that is, a commutative multiplicative monoid with a compatible partial ordering $\preceq$. In this paper, we will always assume that $\mathfrak{M}$ is torsion-free and a lattice for the ordering $\preceq$. A subset $\mathfrak{S}\subseteq\mathfrak{M}$ is said to be grid-based if
$$\mathfrak{S}\subseteq\mathfrak{m}_1^{\mathbb{N}}\cdots\mathfrak{m}_m^{\mathbb{N}}\,\mathfrak{n},$$
for certain monomials $\mathfrak{m}_1,\ldots,\mathfrak{m}_m,\mathfrak{n}\in\mathfrak{M}$ with $\mathfrak{m}_1,\ldots,\mathfrak{m}_m\prec1$. Given a formal sum $f=\sum_{\mathfrak{m}\in\mathfrak{M}}f_{\mathfrak{m}}\,\mathfrak{m}$ with $f_{\mathfrak{m}}\in\mathbb{K}$, we call
$$\operatorname{supp}f=\{\mathfrak{m}\in\mathfrak{M}:f_{\mathfrak{m}}\neq0\}$$
the support of $f$.
We say that $f$ is a grid-based series if $\operatorname{supp}f$ is grid-based. We denote by $\mathbb{K}[[\mathfrak{M}]]$ the set of all grid-based series. Let $f\in\mathbb{K}[[\mathfrak{M}]]$. The finite set $\mathfrak{d}_f$ of $\preceq$-maximal elements of $\operatorname{supp}f$ is called the set of dominant monomials of $f$. If $\mathfrak{d}_f$ is a singleton, then $f$ is said to be uniform. In that case, the unique element $\mathfrak{d}_f$ of this set is called the dominant monomial of $f$ and the corresponding coefficient $c_f=f_{\mathfrak{d}_f}$ the dominant coefficient of $f$. We say that $f$ is infinitesimal (and write $f\prec1$) if $\mathfrak{m}\prec1$ for all $\mathfrak{m}\in\operatorname{supp}f$. Similarly, $f$ is said to be bounded (and we write $f\preceq1$) if $\mathfrak{m}\preceq1$ for all $\mathfrak{m}\in\operatorname{supp}f$. We write $f\asymp1$ if $f$ is uniform and $\mathfrak{d}_f\asymp1$; this is the case if and only if $f=c+\varepsilon$ with $c\in\mathbb{K}^{\neq}$ and $\varepsilon\prec1$.

Proposition. The set $\mathbb{K}[[\mathfrak{M}]]$ forms a $\mathbb{K}$-algebra, whose units are those uniform elements whose dominant monomials are invertible.

The above definitions and results generalize to the case when the coefficient field $\mathbb{K}$ is replaced by a $\mathbb{K}$-vector space $V$. In fact, this may even be seen as a special case, when regarding $V$ as a subset of the quotient field of the tensor algebra of $V$. When working with coefficients in $V$, the proposition becomes:

Proposition. The set $V[[\mathfrak{M}]]$ forms a $\mathbb{K}[[\mathfrak{M}]]$-module.

A possibly infinite family $(f_i)_{i\in I}\in\mathbb{K}[[\mathfrak{M}]]^I$ is said to be grid-based if $\bigcup_{i\in I}\operatorname{supp}f_i$ is grid-based and $\{i\in I:\mathfrak{m}\in\operatorname{supp}f_i\}$ is finite for every $\mathfrak{m}\in\mathfrak{M}$. In that case, the sum $f=\sum_{i\in I}f_i$ with $f_{\mathfrak{m}}=\sum_{i\in I}(f_i)_{\mathfrak{m}}$ is a well-defined grid-based series in $\mathbb{K}[[\mathfrak{M}]]$. It is possible to redevelop basic algebraic notions for this so-called strong linear algebra. For instance, a strongly linear map $\mathbb{K}[[\mathfrak{M}]]\to\mathbb{K}[[\mathfrak{N}]]$ is a linear map which preserves infinite summation in a suitable manner. Similarly, a morphism $\varphi:\mathbb{K}[[x_1,\ldots,x_n]]\to\mathbb{K}[[\tilde{x}_1,\ldots,\tilde{x}_{\tilde{n}}]]$ of strong algebras corresponds to the operator which substitutes $\varphi(x_i)$ for each $x_i$.

Let $\hat{\mathbb{K}}$ be an algebraic extension of $\mathbb{K}$ which contains the algebraic closure $\bar{\mathbb{K}}$ of $\mathbb{K}$. A system of parametric coordinates is determined by a finite number of parameters $\lambda_1,\ldots,\lambda_l$, subject to a finite number of polynomial constraints and one polynomial inequation over $\mathbb{K}$:
$$P_i(\lambda_1,\ldots,\lambda_l)=0\quad(i=1,\ldots,k)$$
$$Q(\lambda_1,\ldots,\lambda_l)\neq0.$$
We denote by $\mathcal{R}_\Lambda\subseteq\hat{\mathbb{K}}^l$ the corresponding region of all points $\hat{\lambda}=(\hat{\lambda}_1,\ldots,\hat{\lambda}_l)$ which satisfy these constraints. We also denote by
$$\mathbb{K}_\Lambda=\mathbb{K}[\lambda_1,\ldots,\lambda_l,\lambda_{l+1}]/(P_1,\ldots,P_k,\lambda_{l+1}Q-1)$$
the corresponding coordinate ring.
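Returning to the order-theoretic definitions above, the set $\mathfrak{d}_f$ of dominant monomials of a (truncated) grid-based series can be computed directly from the definition. A minimal Python sketch under assumptions of our own: the serial coordinates are infinitesimal, monomials $x_1^{a_1}\cdots x_n^{a_n}$ are encoded by exponent tuples, and $x^{a}\preceq x^{b}$ holds precisely when $a_i\geqslant b_i$ for all $i$:

```python
# Dominant monomials of a finite support: the maximal elements for the
# componentwise partial ordering, where larger exponents of infinitesimal
# variables mean smaller monomials (so x^a <= x^b iff a_i >= b_i for all i).

def dominant_monomials(support):
    doms = []
    for a in support:
        # a is dominant unless some distinct b in the support dominates it
        if not any(b != a and all(ai >= bi for ai, bi in zip(a, b))
                   for b in support):
            doms.append(a)
    return doms

supp = [(2, 0), (0, 1), (1, 1), (3, 2)]
# (1,1) and (3,2) are dominated by (0,1); the maximal elements remain
assert sorted(dominant_monomials(supp)) == [(0, 1), (2, 0)]
```

A series is uniform precisely when this set is a singleton; the truncated version of this test is what the algorithms below repeatedly perform.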
Given a $\mathbb{K}$-vector space $V$, we let $V_\Lambda=V\otimes_{\mathbb{K}}\mathbb{K}_\Lambda$. If $\mathbb{K}$ is an effective field, then it is classical that the consistency of a finite system of polynomial constraints can be checked by algorithm, using Gröbner basis techniques for instance. It is classical that any point $\hat{\lambda}\in\mathcal{R}_\Lambda$ induces a unique morphism
$$\sigma_{\hat{\lambda}}:\mathbb{K}_\Lambda\to\hat{\mathbb{K}}$$
with $\sigma_{\hat{\lambda}}(\lambda_i)=\hat{\lambda}_i$ for all $i$, and vice versa. Given $f\in\mathbb{K}_\Lambda$, we will then write $f(\hat{\lambda})=\sigma_{\hat{\lambda}}(f)$. Given a second system of parametric coordinates $\tilde{\Lambda}$, a morphism
$$\varphi:\mathbb{K}_\Lambda\to\mathbb{K}_{\tilde{\Lambda}}$$
induces a morphism $\varphi^*:\mathcal{R}_{\tilde{\Lambda}}\to\mathcal{R}_\Lambda$ of parametric varieties by asking that $f(\varphi^*(\hat{\tilde{\lambda}}))=\varphi(f)(\hat{\tilde{\lambda}})$ for all $f\in\mathbb{K}_\Lambda$. If $\varphi^*$ is injective, then $\mathcal{R}_{\tilde{\Lambda}\downharpoonright\Lambda}=\varphi^*(\mathcal{R}_{\tilde{\Lambda}})$ will be called a subregion of $\mathcal{R}_\Lambda$.

A grid-based series with parameters in $\Lambda$ is simply a grid-based series in $\mathbb{K}_\Lambda[[\mathfrak{M}]]$ (or in $V_\Lambda[[\mathfrak{M}]]$). Given a point $\hat{\lambda}\in\mathcal{R}_\Lambda$, we let $f(\hat{\lambda})\in\hat{\mathbb{K}}[[\mathfrak{M}]]$ (or $f(\hat{\lambda})\in\hat{V}[[\mathfrak{M}]]$ with $\hat{V}=V\otimes_{\mathbb{K}}\hat{\mathbb{K}}$) be its evaluation at this point. We say that $f\in\mathbb{K}_\Lambda[[\mathfrak{M}]]$ is uniform if $f$ admits a unique dominant monomial and an invertible dominant coefficient. This is the case if and only if $f(\hat{\lambda})$ is uniform for all $\hat{\lambda}\in\mathcal{R}_\Lambda$, since $\mathcal{R}_\Lambda\neq\varnothing$. More generally, we therefore say that $f\in V_\Lambda[[\mathfrak{M}]]$ is uniform if and only if $f(\hat{\lambda})$ is uniform for all $\hat{\lambda}\in\mathcal{R}_\Lambda$. We say that $f\in V_\Lambda[[\mathfrak{M}]]$ is $\mathfrak{d}$-uniform if there exists a finite set $\mathfrak{d}\subseteq\mathfrak{M}$ such that $\mathfrak{d}_{f(\hat{\lambda})}=\mathfrak{d}$ for all $\hat{\lambda}\in\mathcal{R}_\Lambda$.

Proposition. Let $f\in\mathbb{K}_\Lambda[[\mathfrak{M}]]$. Then $\mathcal{R}_\Lambda$ can be decomposed
$$\mathcal{R}_\Lambda=\mathcal{R}_{\Lambda_1\downharpoonright\Lambda}\amalg\cdots\amalg\mathcal{R}_{\Lambda_r\downharpoonright\Lambda}$$
into a finite number of subregions, such that for each $i\in\{1,\ldots,r\}$, there exists a set $\mathfrak{d}_i$, such that for each $\hat{\lambda}\in\mathcal{R}_{\Lambda_i}$, we have $\mathfrak{d}_{f(\hat{\lambda})}=\mathfrak{d}_i$.

Proof. Let $\mathfrak{D}$ be the set of monomials $\mathfrak{m}\in\operatorname{supp}f$ such that $\mathfrak{m}$ is a dominant monomial of $f(\hat{\lambda})$ for some $\hat{\lambda}\in\mathcal{R}_\Lambda$. Assume for contradiction that $\mathfrak{D}$ is infinite. Since $\operatorname{supp}f$ is grid-based, there exists an infinite decreasing sequence $\mathfrak{m}_1\succ\mathfrak{m}_2\succ\cdots$ of elements in $\mathfrak{D}$. Since $\mathbb{K}_\Lambda$ is Noetherian, the ideal generated by $f_{\mathfrak{m}_1},f_{\mathfrak{m}_2},\ldots$ is generated by $f_{\mathfrak{m}_1},\ldots,f_{\mathfrak{m}_k}$ for some $k$. Now choose $\hat{\lambda}\in\mathcal{R}_\Lambda$ such that $\mathfrak{m}_{k+1}$ is a dominant monomial of $f(\hat{\lambda})$. Then $f_{\mathfrak{m}_1}(\hat{\lambda})=\cdots=f_{\mathfrak{m}_k}(\hat{\lambda})=0$, whence $f_{\mathfrak{m}_{k+1}}(\hat{\lambda})=0$, a contradiction. This shows that $\mathfrak{D}$ and $\{\mathfrak{d}_{f(\hat{\lambda})}:\hat{\lambda}\in\mathcal{R}_\Lambda\}\subseteq\mathcal{P}(\mathfrak{D})$ are finite.
Given $\mathfrak{d}\in\mathcal{P}(\mathfrak{D})$, the set $\{\hat{\lambda}\in\mathcal{R}_\Lambda:\mathfrak{d}_{f(\hat{\lambda})}=\mathfrak{d}\}$ is determined by the polynomial equations $f_{\mathfrak{m}}(\hat{\lambda})=0$ for all $\mathfrak{m}\in\mathfrak{D}$ which are not dominated by any $\mathfrak{n}\in\mathfrak{d}$, together with one polynomial inequation $\prod_{\mathfrak{n}\in\mathfrak{d}}f_{\mathfrak{n}}(\hat{\lambda})\neq0$.

A system of serial coordinates consists of a finite number of serial coordinates $x_1,\ldots,x_n$, subject to a finite number of asymptotic constraints
$$x^{\mu_i}=x_1^{\mu_{i,1}}\cdots x_n^{\mu_{i,n}}\prec1\quad(\mu_i\in\mathbb{Q}^n,\;i=1,\ldots,c).$$
These constraints can be encoded by the finite set $\mathfrak{Z}=\{x^{\mu_1},\ldots,x^{\mu_c}\}$, which is said to be independent if
$$\forall k_1,\ldots,k_c\in\mathbb{Q},\quad k_1\mu_1+\cdots+k_c\mu_c=0\Longrightarrow k_1=\cdots=k_c=0.$$
In that case, $x^{\mu_1\mathbb{Z}+\cdots+\mu_c\mathbb{Z}}$ is a monomial group for the partial ordering given by
$$x^{\alpha}\preceq x^{\beta}\Longleftrightarrow(\exists k_1,\ldots,k_c\in\mathbb{Q}^{\geqslant},\;\alpha-\beta=k_1\mu_1+\cdots+k_c\mu_c).$$
The set $\mathfrak{Z}$ generates a monomial submonoid
$$\mathfrak{M}=\mathfrak{M}_{\mathfrak{Z}}=x^{\mu_1\mathbb{N}+\cdots+\mu_c\mathbb{N}},$$
which is said to be conical. Series in $\mathbb{K}[[\mathfrak{M}]]$, $V[[\mathfrak{M}]]$, $\mathbb{K}_\Lambda[[\mathfrak{M}]]$ and $V_\Lambda[[\mathfrak{M}]]$ are also said to be conical. For what follows, it will be convenient to always assume that
$$\mathfrak{Z}\supseteq\{x_1,\ldots,x_n\}.$$
The theory of linear programming provides us with efficient algorithms for testing consistency and whether a given constraint $x^{\mu}\prec x^{\nu}$ is implied by the constraints in $\mathfrak{Z}$.

A system $\Pi$ of conical coordinates consists of the combination of a system $\Lambda_\Pi$ of parametric coordinates and a system of serial coordinates. In what follows, we will shortly call such a $\Pi$ a coordinate system. We will denote the corresponding parameters by $\lambda_{\Pi,1},\ldots,\lambda_{\Pi,l_\Pi}$, the serial coordinates by $x_{\Pi,1},\ldots,x_{\Pi,n_\Pi}$, the asymptotic constraints by $\mathfrak{Z}_\Pi$, and $\mathfrak{M}_\Pi=\mathfrak{M}_{\mathfrak{Z}_\Pi}$. If $\Pi$ is clear from the context, then we will drop all subscripts $\Pi$. When working with respect to a coordinate system $\tilde{\Pi}$ or $\Pi'$, we will again drop subscripts and rather use tildes or primes for distinguishing from $\Pi$. For instance, the serial coordinates with respect to $\tilde{\Pi}$ will be denoted by $\tilde{x}_1,\ldots,\tilde{x}_{\tilde{n}}$. The coordinate ring of $\Pi$ is given by
$$\mathbb{K}_\Pi=\mathbb{K}_{\Lambda_\Pi}[[\mathfrak{M}_\Pi]].$$
Let $\hat{\mathfrak{M}}$ be a totally ordered monomial group such that any conical monomial group $\mathfrak{M}$ can be embedded in $\hat{\mathfrak{M}}$. Given any infinite dimensional totally ordered vector space $W$ over $\mathbb{Q}$, one may take $\hat{\mathfrak{M}}$ to be the multiplicative monomial group $t^{W}$ isomorphic to $W$.
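The independence condition on the constraint set $\mathfrak{Z}$ is a plain $\mathbb{Q}$-linear algebra question: the exponent vectors $\mu_1,\ldots,\mu_c$ must be $\mathbb{Q}$-linearly independent. A minimal sketch (names and encoding are our own; the full implication tests mentioned above would use linear programming instead):

```python
from fractions import Fraction

# Rank of a family of rational vectors, by Gaussian elimination over Q;
# the constraint set is "independent" iff its rank equals its cardinality.
def rank(vectors):
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(mus):
    return rank(mus) == len(mus)

assert independent([(1, 0, -1), (0, 1, 1)])
assert not independent([(1, 1, 0), (2, 2, 0)])   # second is twice the first
```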
A morphism of strong algebras
$$\sigma:\mathbb{K}_\Pi\to\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$$
with $\sigma(\mathfrak{M}_\Pi)\subseteq\hat{\mathfrak{M}}$ is entirely determined by the $l$-tuple $\hat{\lambda}=\sigma(\lambda)=(\sigma(\lambda_1),\ldots,\sigma(\lambda_l))\in\hat{\mathbb{K}}^l$ and the $n$-tuple $\hat{x}=\sigma(x)=(\sigma(x_1),\ldots,\sigma(x_n))\in(\hat{\mathfrak{M}}^{\prec})^n$. The pair $\xi=(\hat{\lambda},\hat{x})$ is called a point, and the set $\mathcal{R}_\Pi$ of all such points is the region associated to $\Pi$. As before, we will write $f(\xi)=\sigma(f)$ for all $f\in\mathbb{K}_\Pi$.

Let $i\in\{1,\ldots,n\}$ and let $\Pi_{\neq i}$ be the coordinate system obtained from $\Pi$ by removing $x_i$ and keeping the same parameters (see the remark below), so that $\mathfrak{M}_{\Pi_{\neq i}}\,x_i^{\mathbb{N}}\subseteq\mathfrak{M}_\Pi$. A series $f\in\mathbb{K}_\Pi$ is said to be ordinary in $x_i$ if $\operatorname{supp}f\subseteq\mathfrak{M}_{\Pi_{\neq i}}\,x_i^{\mathbb{N}}$. Then $f$ may also be regarded as a series in $\mathbb{K}_{\Pi_{\neq i}}[[x_i]]$. If $x_i$ is the only element in $\mathfrak{Z}$ which depends on $x_i$ (so that $\mathfrak{M}_\Pi=\mathfrak{M}_{\Pi_{\neq i}}\,x_i^{\mathbb{N}}$), then $x_i$ is called an ordinary coordinate of $\Pi$, and all series in $\mathbb{K}_\Pi$ are ordinary in $x_i$.

Remark. A more precise construction of the coordinate system $\Pi_{\neq i}$ goes as follows. We take $\Lambda_{\Pi_{\neq i}}=\Lambda_\Pi$. The serial coordinates of $\Pi_{\neq i}$ are $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n$. We finally enforce
$$\mathfrak{M}_{\Pi_{\neq i}}=\mathfrak{M}_\Pi\cap x_1^{\mathbb{Q}}\cdots x_{i-1}^{\mathbb{Q}}\,x_{i+1}^{\mathbb{Q}}\cdots x_n^{\mathbb{Q}},$$
by taking $\mathfrak{Z}_{\Pi_{\neq i}}$ to be the subset of monomials $\mathfrak{m}\in\mathfrak{M}_{\Pi_{\neq i}}$ which admit only trivial factorizations in $\mathfrak{M}_{\Pi_{\neq i}}$. This minimal set of generators is finite and can be computed as follows. Using linear programming, we first compute a minimal set of generators $\nu_1,\ldots,\nu_r$ for the cone of exponents of $\mathfrak{M}_{\Pi_{\neq i}}$. For each $j$, the minimal $k\in\mathbb{N}^{>}$ with $x^{k\nu_j}\in\mathfrak{M}_{\Pi_{\neq i}}$ yields a minimal generator $\mathfrak{z}_j=x^{k\nu_j}$ for $\mathfrak{Z}_{\Pi_{\neq i}}$. Every remaining minimal generator lies in the finite set $\{\mathfrak{z}_1^{\beta_1}\cdots\mathfrak{z}_r^{\beta_r}:\beta_j\in\mathbb{Q},\,0\leqslant\beta_j\leqslant1\}\cap\mathfrak{M}_{\Pi_{\neq i}}$, so we conclude by removing from this set all elements which can be factored.

Consider two coordinate systems $\Pi$ and $\tilde{\Pi}$. A conical change of coordinates is a morphism of strong algebras
$$\varphi:\mathbb{K}_\Pi\to\mathbb{K}_{\tilde{\Pi}},$$
whose restriction to $\mathfrak{M}_\Pi$ takes values in $\mathbb{K}_{\tilde{\Pi}}$. Notice that the image of a uniform series under $\varphi$ is again uniform as soon as the image of its dominant coefficient is non-zero, and zero otherwise. The morphism $\varphi$ induces a mapping
$$\varphi^*:\mathcal{R}_{\tilde{\Pi}}\to\mathcal{R}_\Pi$$
on the associated varieties by
$$f(\varphi^*(\tilde{\xi}))=\varphi(f)(\tilde{\xi})$$
for all $f\in\mathbb{K}_\Pi$ and $\tilde{\xi}\in\mathcal{R}_{\tilde{\Pi}}$. If $\varphi^*$ is injective, then $\varphi$ will be called an embedding. If $\varphi^*$ is injective and the dimensions $n$ and $\tilde{n}$ of $\Pi$ and $\tilde{\Pi}$ coincide, then $\varphi$ is called a refinement and
$$\mathcal{R}_{\tilde{\Pi}\downharpoonright\Pi}=\varphi^*(\mathcal{R}_{\tilde{\Pi}})$$
a subregion of $\mathcal{R}_\Pi$. A refinement $\varphi$ is said to be triangular if $\varphi(x_i)$ only depends on $\tilde{x}_1,\ldots,\tilde{x}_i$ and $\varphi(x_i)$ is ordinary in $\tilde{x}_i$, for $i=1,\ldots,n$.
An $x_k$-refinement $\varphi$ is a triangular refinement such that $\varphi(x_i)=\tilde{x}_i$ for all $i\neq k$. A refinement $\varphi$ is said to be strict in $x_k$ if $\varphi$ is triangular and $\varphi(f)$ is ordinary in $\tilde{x}_k$ for all $f\in\mathbb{K}_\Pi$. An $x_k$-refinement is said to be strict if it is strict in $x_k$. The composition of two triangular refinements is again triangular, the composition of two $x_k$-refinements is again an $x_k$-refinement, and the composition of a refinement which is strict in $x_k$ and a triangular refinement (or a triangular refinement and a refinement which is strict in $x_k$) is again strict in $x_k$. We will sometimes say that the coordinate $x_k$ is refined, when applying a refinement which is strict in $x_k$.

Example. Consider a conical morphism $\varphi:\mathbb{K}_\Pi\to\mathbb{K}_{\tilde{\Pi}}$ with $\tilde{\mathfrak{M}}=\mathfrak{M}$ and $\varphi(x_i)=\tilde{x}_i$ for all $i$. Then $\varphi$ is entirely determined by its restriction $\varphi_\Lambda$ to $\mathbb{K}_\Lambda$. In particular, $\varphi$ is a refinement if and only if $\mathbb{K}_{\tilde{\Lambda}}$ consists of fractions of elements in $\varphi_\Lambda(\mathbb{K}_\Lambda)$. Such refinements correspond to the imposition of constraints on the parameters.

Example. Consider a conical morphism $\varphi:\mathbb{K}_\Pi\to\mathbb{K}_{\tilde{\Pi}}$ such that $\varphi_\Lambda=\operatorname{Id}$, $\tilde{n}=n$ and $\varphi(x_i)=\tilde{x}_i$ for all $i$. Then $\mathfrak{M}_\Pi\subseteq\mathfrak{M}_{\tilde{\Pi}}$ and $\varphi$ is a refinement which corresponds to the imposition of new asymptotic constraints on the serial coordinates.

Example. The ramification $R_{p_1,\ldots,p_n}:\mathbb{K}_\Pi\to\mathbb{K}_{\tilde{\Pi}}$ of orders $p_1,\ldots,p_n$ is the triangular refinement defined by $(R_{p_1,\ldots,p_n})_\Lambda=\operatorname{Id}$ and
$$R_{p_1,\ldots,p_n}(x_i)=\tilde{x}_i^{p_i}\quad(i=1,\ldots,n).$$
More precisely, $\tilde{\mathfrak{Z}}$ consists of $\{\tilde{x}_1,\ldots,\tilde{x}_n\}$, together with a constraint $\tilde{x}_1^{p_1\mu_1}\cdots\tilde{x}_n^{p_n\mu_n}\prec1$ for every constraint $x_1^{\mu_1}\cdots x_n^{\mu_n}\prec1$ in $\mathfrak{Z}$.

Example. Given a coordinate system $\Pi$, we will denote by $\Pi^{(k)}$ the coordinate system formed by the parameters and the first $k$ serial coordinates $x_1,\ldots,x_k$ of $\Pi$. We denote by $\mathfrak{M}^{(k)}$ the corresponding conical monomial group. Given a new invertible parameter $\tilde{\lambda}$ and an infinitesimal monomial $\mathfrak{m}\prec1$ in $\mathfrak{M}^{(k-1)}$ which is incomparable to $x_k$, let us show that the asymptotic change of variables
$$x_k\to\mathfrak{m}\,(\tilde{\lambda}+\tilde{x}_k)\quad(\tilde{x}_k\prec1)$$
gives rise to a strict $x_k$-refinement. Indeed, the parametric coordinate ring of the new coordinate system $\tilde{\Pi}$ is given by $\mathbb{K}_{\tilde{\Lambda}}=\mathbb{K}_\Lambda[\tilde{\lambda},\tilde{\lambda}^{-1}]$. The new serial coordinates are $\tilde{x}_i=x_i$ for $i\neq k$ and a new coordinate $\tilde{x}_k$.
The monomial group $\tilde{\mathfrak{M}}$ is generated by $\tilde{x}_k$ and the monomials $x^{\nu}\,(\mathfrak{m}/x_k)^{\nu_k}$ with $x^{\nu}\in\mathfrak{M}$. We take $\varphi(x_i)=\tilde{x}_i$ for $i\neq k$ and $\varphi(x_k)=\mathfrak{m}\,(\tilde{\lambda}+\tilde{x}_k)$. Given a point $\xi=\varphi^*(\tilde{\xi})\in\mathcal{R}_{\tilde{\Pi}\downharpoonright\Pi}$, we have
$$\tilde{\lambda}(\tilde{\xi})=c_{x_k(\xi)/\mathfrak{m}(\xi)},\qquad\tilde{x}_k(\tilde{\xi})=x_k(\xi)/\mathfrak{m}(\xi)-c_{x_k(\xi)/\mathfrak{m}(\xi)},$$
so $\varphi$ is indeed an $x_k$-refinement. Since $\tilde{x}_k$ is an ordinary coordinate of $\tilde{\Pi}$, the refinement is strict in $x_k$.

Example. Let $g\in\mathbb{K}_{\Pi^{(k-1)}}$ be a uniform series with an invertible dominant coefficient and infinitesimal dominant monomial $\mathfrak{d}_g\prec1$. If $\mathfrak{d}_g$ is incomparable to $x_k$, then it can be shown as above that the asymptotic change of coordinates
$$x_k\to g+\mathfrak{d}_g\,\tilde{x}_k\quad(\tilde{x}_k\prec1)$$
gives rise to a strict $x_k$-refinement.

One technical difficulty concerning the upcoming algorithms is that we frequently have to change our coordinates. From a notational point of view, it would be cumbersome to explicitly rewrite all our objects with respect to the new coordinates after every change. Instead, we will rather assume that our coordinate system with the corresponding constraints is stored in a global variable, and that our objects are automatically rewritten with respect to the most recent coordinate system when necessary. More precisely, starting with the coordinate ring $\mathbb{K}_{\Pi_0}$, the execution of an algorithm up to a given "current execution point" gives rise to a sequence of refinements:
$$\varphi_1:\mathbb{K}_{\Pi_0}\to\mathbb{K}_{\Pi_1},\quad\varphi_2:\mathbb{K}_{\Pi_1}\to\mathbb{K}_{\Pi_2},\quad\ldots,\quad\varphi_t:\mathbb{K}_{\Pi_{t-1}}\to\mathbb{K}_{\Pi_t}.$$
The "current coordinates" $\Pi_t$ at that point are encoded by the finite sets of parameters and serial coordinates, together with the constraints imposed upon them. Now consider a series $f\in\mathbb{K}_{\Pi_i}$ introduced between the $i$-th and the $(i+1)$-th refinement. We will encode such a series by the pair $(\Pi_i,f)$. Whenever an operation needs to be performed on $f$ at the current execution point, we automatically replace this encoding by $(\Pi_t,f')$, where $f'=(\varphi_t\circ\cdots\circ\varphi_{i+1})(f)$. Similarly, a monomial $\mathfrak{m}\in\mathfrak{M}_{\Pi_i}$ will be encoded by the pair $(\Pi_i,\mathfrak{m})$.
When necessary, this encoding will be replaced by $(\Pi_t,\tilde{\mathfrak{m}})$, where $\tilde{\mathfrak{m}}$ is the dominant monomial of $(\varphi_t\circ\cdots\circ\varphi_{i+1})(\mathfrak{m})$; this will ensure that monomials remain monomials of the same asymptotic magnitude at any stage, even though their values as series may change.

Sometimes, we will also work with a system $\Sigma$ of "subcoordinates" of the current coordinate system $\Pi$. This means that the serial coordinates of $\Sigma$ are a subset $\{x_{i_1},\ldots,x_{i_s}\}$ of $\{x_1,\ldots,x_n\}$, with the constraints induced from $\Pi$. The parametric coordinates of $\Sigma$ and $\Pi$ are assumed to be identical. When working with respect to such a system $\Sigma$ of subcoordinates, any change of coordinates for $\Sigma$ lifts to a change of coordinates for $\Pi$. In particular, all changes of coordinates can always be performed on $\Pi$, even when we need to work with respect to subcoordinates.

Remark. Various variants of the above strategy are possible. For instance, whenever an operation on two series takes place, then we may rewrite them with respect to the simplest common coordinate system. Instead of using a global variable for the current coordinate system, operations on the coordinate system (such as the imposition of constraints) may also be done on the series which are intended to use the new coordinates, thereby allowing for a more functional programming style.

Another frequently occurring difficulty is that certain relations may not be satisfied uniformly on the current region $\mathcal{R}_\Pi$. For instance, a polynomial relation $P(\lambda)=0$ for the parameters may be satisfied on a certain subregion, whereas the opposite relation $P(\lambda)\neq0$ is satisfied on another subregion. If, at a certain point during the execution of an algorithm, we need to decide whether $P(\lambda)=0$ or $P(\lambda)\neq0$, we may thus have to decompose the current region into these two subregions and continue the execution on each of them. A classical computer science approach to this situation is to allow for so-called non-deterministic algorithms.
This non-deterministic setting features an additional programming construct called branching, which consists of selecting the way to continue the algorithm non-deterministically, among a finite number of cases. For instance, when testing whether $P(\lambda)$ vanishes, one branch would correspond to the subregion of $\mathcal{R}_\Pi$ for which $P(\lambda)=0$ and the other branch would correspond to the subregion of $\mathcal{R}_\Pi$ for which $P(\lambda)\neq0$.

It is classical that a non-deterministic computation can be represented by a tree: each inner node $\nu$ of the tree corresponds to a non-deterministic choice in the algorithm, and the children $\nu_1,\ldots,\nu_s$ of the node to the possible continuations of the algorithm. König's lemma implies that this computation tree is finite if and only if it contains no infinite chains. In other words, if the non-deterministic algorithm terminates for all possible sequences of choices, then we obtain a deterministic algorithm by running the non-deterministic algorithm several times, once for each possible sequence of choices.

In our setting, each node $\nu$ of the computation tree also corresponds to a coordinate system $\Pi_\nu$, and the case separation induces a partition
$$\mathcal{R}_{\Pi_\nu}=\mathcal{R}_{\Pi_{\nu_1}\downharpoonright\Pi_\nu}\amalg\cdots\amalg\mathcal{R}_{\Pi_{\nu_s}\downharpoonright\Pi_\nu}.$$
In particular, the root of the tree corresponds to the original coordinates $\Pi$ and the leaves of the tree correspond to the final coordinates $\Pi_1,\ldots,\Pi_r$ for which the algorithm provides uniform results. At the end, it will then be guaranteed that
$$\mathcal{R}_\Pi=\mathcal{R}_{\Pi_1\downharpoonright\Pi}\amalg\cdots\amalg\mathcal{R}_{\Pi_r\downharpoonright\Pi}.$$
Throughout our algorithms, we will assume that branches which correspond to empty regions are automatically eliminated. In other words, a non-deterministic process is killed as soon as contradictory constraints are imposed.

Remark. The approach of non-deterministic algorithms is known under various other names, depending on the area. In computer algebra, it is sometimes referred to as dynamic evaluation.
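The exhaustion of all sequences of choices can be sketched concretely. The following Python code is an illustration of our own (not part of the formal development): a non-deterministic algorithm is a function taking a `choose` primitive, and we rerun it once per branch of the computation tree, advancing the recorded choice sequence like an odometer:

```python
def run_all(algorithm):
    """Execute a non-deterministic `algorithm(choose)` once for every
    possible (assumed finite) sequence of choices, collecting the results."""
    results, forced = [], []            # `forced` prescribes a choice prefix
    while True:
        trace, pos = [], 0
        def choose(options):
            nonlocal pos
            i = forced[pos] if pos < len(forced) else 0
            trace.append((i, len(options)))
            pos += 1
            return options[i]
        results.append(algorithm(choose))
        # advance to the next unexplored sequence of choices
        while trace and trace[-1][0] + 1 >= trace[-1][1]:
            trace.pop()
        if not trace:
            return results
        forced = [i for i, _ in trace[:-1]] + [trace[-1][0] + 1]

# Usage: a two-way case separation on two parameters yields four branches.
def algo(choose):
    a = choose(["P=0", "P!=0"])
    b = choose(["Q=0", "Q!=0"])
    return (a, b)

branches = run_all(algo)
assert len(branches) == 4
```

Killing branches with contradictory constraints would correspond to raising an exception inside a branch and discarding its result.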
In basic programming languages, non-deterministic algorithms are best implemented simply by rerunning the algorithm several times from its start and exhausting all sequences of non-deterministic choices. In higher order programming languages which support so-called continuations, this rerunning can be avoided.

Assume now that $\mathbb{K}$ is an effective field. Since power series in $\mathbb{K}[[x]]$ and more general grid-based power series in $\mathbb{K}[[\mathfrak{M}]]$ may contain infinitely many coefficients, we need to restrict our attention to suitable subclasses of series in order to compute with them. In this section, we recall the concept of a "local community", which axiomatizes such computationally interesting classes of series. In fact, it also models other interesting classes of series, such as the class of convergent multivariate power series over $\mathbb{C}$. We will only recall the main definitions and facts about local communities, and similarly for Cartesian representations in the next subsection; more details can be found in the literature.

A local community over an effective $\mathbb{K}$-algebra $A$ is a family $\mathcal{A}=(\mathcal{A}_n)_{n\in\mathbb{N}}$ of $A$-subalgebras $\mathcal{A}_n\subseteq A[[z_1,\ldots,z_n]]$, which satisfies the following properties:

- For all $k\leqslant n$, we have $\mathcal{A}_k\subseteq\mathcal{A}_n$.
- Given $n\geqslant1$ and $f\in\mathcal{A}_n$ such that $f$ is divisible by $z_n$, we have $f/z_n\in\mathcal{A}_n$.
- Given a strong algebra morphism $\varphi:A[[z_1,\ldots,z_n]]\to A[[\tilde{z}_1,\ldots,\tilde{z}_{\tilde{n}}]]$ with $\varphi(z_1),\ldots,\varphi(z_n)\in\mathcal{A}_{\tilde{n}}$, we have $\varphi(\mathcal{A}_n)\subseteq\mathcal{A}_{\tilde{n}}$.
- Given $n\geqslant1$ and $f\in\mathcal{A}_n$ such that the constant coefficient of $f$ vanishes and the coefficient of $z_n$ in $f$ equals $1$, let $g$ be the unique power series $g\in A[[z_1,\ldots,z_{n-1}]]$ such that the substitution of $g$ for $z_n$ in $f$ vanishes. Then $g\in\mathcal{A}_{n-1}$.

Given $f\in\mathcal{A}_n$ and $i\leqslant n$, we notice that
$$\frac{\partial f}{\partial z_i}(z_1,\ldots,z_n)=\lim_{z_{n+1}\to0}\frac{f(z_1,\ldots,z_i+z_{n+1},\ldots,z_n)-f(z_1,\ldots,z_n)}{z_{n+1}}\in\mathcal{A}_n,$$
since computing the limit for $z_{n+1}\to0$ reduces to substituting $z_{n+1}$ by zero. In particular, when expanding $f$ as a series in $z_n$, each of its coefficients is again in $\mathcal{A}_{n-1}$.

Example. Let $\mathcal{A}_n$ be the set of all algebraic power series in $z_1,\ldots,z_n$ over $\mathbb{K}$ for each $n$. Then $\mathcal{A}=(\mathcal{A}_n)_{n\in\mathbb{N}}$ is a local community over $\mathbb{K}$, which is actually the smallest local community over $\mathbb{K}$.
Example. Assuming $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$, let $\mathcal{A}_n$ be the set of all convergent power series in $z_1,\ldots,z_n$ over $\mathbb{K}$. Then $\mathcal{A}=(\mathcal{A}_n)_{n\in\mathbb{N}}$ is a local community.

The local community $\mathcal{A}$ is said to be effective if there are algorithms for carrying out the $\mathbb{K}$-algebra operations in $\mathcal{A}_n$, for performing the division by $z_n$, for the substitutions $\varphi$, and for computing the implicit series $g$ as a function of $f$ in the axioms above. For instance, the local community of all algebraic power series over $\mathbb{K}$ is effective.

Given a local community $\mathcal{A}$ over $\mathbb{K}$ and a system $\Lambda$ of parametric coordinates over $\mathbb{K}$, we denote by $\mathcal{A}_\Lambda$ the smallest local community over $\mathbb{K}_\Lambda$ which contains $\mathcal{A}$. Given another system $\tilde{\Lambda}$ of parametric coordinates, any morphism $\varphi:\mathbb{K}_\Lambda\to\mathbb{K}_{\tilde{\Lambda}}$ induces a natural componentwise morphism $\varphi_{\mathcal{A}}:\mathcal{A}_\Lambda\to\mathcal{A}_{\tilde{\Lambda}}$. We say that $\mathcal{A}$ is parametrically effective if $\mathcal{A}_\Lambda$ is effective for each system of parametric coordinates over $\mathbb{K}$ and if $\varphi_{\mathcal{A}}$ is computable whenever $\varphi:\mathbb{K}_\Lambda\to\mathbb{K}_{\tilde{\Lambda}}$ is computable. The local community of all algebraic power series over $\mathbb{K}$ is parametrically effective.

Remark. It can be shown that $\mathcal{A}$ is parametrically effective as soon as $\mathcal{A}$ is effective. The idea is first to represent elements of $\mathbb{K}_\Lambda$ by algebraic series, after substitutions $\lambda_i\to c_i+z_i$ for a suitable point $c\in\mathcal{R}_\Lambda$ and a transcendence basis of the parameters. Elements of the local community over $\mathbb{K}_\Lambda$ can then be regarded as elements of $\mathcal{A}$ in a larger number of variables.

A Cartesian series in $z_1,\ldots,z_n$ over $\mathbb{K}$ is an element of $\mathbb{K}((z_1,\ldots,z_n))$, the ring of grid-based series over the group $z_1^{\mathbb{Z}}\cdots z_n^{\mathbb{Z}}$ with the componentwise partial ordering on $\mathbb{Z}^n$. We may always rewrite such a series as $f=z^{v}g$ with $v\in\mathbb{Z}^n$ and $g\in\mathbb{K}[[z_1,\ldots,z_n]]$. Let $\mathfrak{M}$ be an arbitrary monomial monoid and consider a grid-based series $f\in\mathbb{K}[[\mathfrak{M}]]$. A Cartesian representation for $f$ is a series $\check{f}\in\mathbb{K}((z_1,\ldots,z_n))$ with $\sigma(\check{f})=f$ for some strong algebra morphism $\sigma:\mathbb{K}((z_1,\ldots,z_n))\to\mathbb{K}[[\mathfrak{M}]]$ with $\sigma(z_i)\in\mathfrak{M}^{\prec}$ for all $i$. A family $(\check{f}_i)_{i\in I}$ of Cartesian representations is said to be compatible if the strong morphism $\sigma$ is the same for all its components $\check{f}_i$. It can be shown that any finite family of grid-based series admits a family of compatible Cartesian representations. Let $\mathcal{A}$ be a local community over $\mathbb{K}$. A Cartesian representation $\check{f}\in z_1^{\mathbb{Z}}\cdots z_n^{\mathbb{Z}}\,\mathcal{A}_n$ of $f$ is called an $\mathcal{A}$-representation of $f$.
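The normal form $f=z^{v}g$ of a Cartesian series is easy to make concrete. A small Python sketch under an encoding of our own (a Cartesian series as a dictionary from exponent tuples in $\mathbb{Z}^n$ to coefficients):

```python
# Factor a Cartesian series as f = z^v * g, where v is the componentwise
# minimum of the exponents occurring in f, so that g is a genuine power
# series (all exponents of g are non-negative).
def factor_cartesian(series):
    exps = list(series)
    v = tuple(min(e[i] for e in exps) for i in range(len(exps[0])))
    g = {tuple(ei - vi for ei, vi in zip(e, v)): c
         for e, c in series.items()}
    return v, g

# 3*z1^-1*z2^2 + 5*z2  =  z1^-1*z2 * (3*z2 + 5*z1)
v, g = factor_cartesian({(-1, 2): 3, (0, 1): 5})
assert v == (-1, 1)
assert g == {(0, 1): 3, (1, 0): 5}
```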
The set $\mathbb{K}[[\mathfrak{M}]]_{\mathcal{A}}$ of all series with an $\mathcal{A}$-representation is clearly a $\mathbb{K}$-subalgebra of $\mathbb{K}[[\mathfrak{M}]]$. A Cartesian representation $\check{f}$ of $f$ is said to be faithful if for every dominant monomial $\check{\mathfrak{d}}$ of $\check{f}$, there exists a dominant monomial $\mathfrak{d}$ of $f$ with $\sigma(\check{\mathfrak{d}})\asymp\mathfrak{d}$. Any $f\in\mathbb{K}[[\mathfrak{M}]]_{\mathcal{A}}$ can be shown to admit a faithful $\mathcal{A}$-representation. Every uniform $f\in\mathbb{K}[[\mathfrak{M}]]_{\mathcal{A}}$ also admits a uniform $\mathcal{A}$-representation. Moreover, if $\mathcal{A}$ is effective, then there are algorithms for computing faithful and uniform $\mathcal{A}$-representations.

The above definitions and properties extend to the case when we take our coefficients in a strong $\mathbb{K}$-module $V$ of the form $V=\mathbb{K}[[t_1,\ldots,t_m]]$. In that case, a Cartesian representation $\check{f}\in V((z_1,\ldots,z_n))$ can be rewritten as a series
$$\check{f}\in z_1^{\mathbb{Z}}\cdots z_n^{\mathbb{Z}}\,\mathbb{K}[[z_1,\ldots,z_n]][[t_1,\ldots,t_m]].$$
Under the identifications $t_j=z_{n+j}$, we then say that $\check{f}$ is an $\mathcal{A}$-representation if $\check{f}\in z_1^{\mathbb{Z}}\cdots z_n^{\mathbb{Z}}\,\mathcal{A}_{n+m}$. The corresponding subspace $V[[\mathfrak{M}]]_{\mathcal{A}}$ of $V[[\mathfrak{M}]]$ is a $\mathbb{K}[[\mathfrak{M}]]_{\mathcal{A}}$-module and the results about faithful and uniform representations extend.

Given a faithful Cartesian representation $\check{f}$ of a conical series $f\in V_\Lambda[[\mathfrak{M}]]$, the dominant monomials of $f$ can be read off from the dominant monomials of $\check{f}$. In particular, if $\mathcal{A}$ is effective, then we have an algorithm for the computation of $\mathfrak{d}_f$. In general, $f$ is not $\mathfrak{d}$-uniform, so case separations may be necessary in order to enforce this property. As long as $f$ is not $\mathfrak{d}$-uniform, it suffices to pick a non-uniform dominant coefficient $c$ of $f$, distinguish the two cases when $c=0$ and $c\neq0$, and keep computing the dominant monomials of $f$ in both cases. In a similar way as in the proposition above, it can be shown that this algorithm terminates.

Given a finite subset $\mathfrak{d}\subseteq\mathfrak{M}$ of monomials, the associated Newton polytope is the subset $\mathfrak{N}_{\mathfrak{d}}$ of all monomials $\mathfrak{m}\in\mathfrak{M}$ for which there exists a strong morphism $\sigma:\mathbb{K}[[\mathfrak{M}]]\to\hat{\mathbb{K}}[[\hat{\mathfrak{M}}]]$ such that $\sigma(\mathfrak{m})\succeq\sigma(\mathfrak{n})$ for all $\mathfrak{n}\in\mathfrak{d}$. The computation of $\mathfrak{N}_{\mathfrak{d}}$ as a function of $\mathfrak{d}$ is classical and corresponds to the computation of a convex hull. If $\mathfrak{d}=\mathfrak{d}_f$, then we call $\mathfrak{N}_f=\mathfrak{N}_{\mathfrak{d}_f}$ the Newton polytope of $f$. If $f$ is $\mathfrak{d}$-uniform, then we call $\mathfrak{N}_{\mathfrak{d}}$ the $\mathfrak{d}$-uniform Newton polytope of $f$.
By what precedes, and modulo case separations, we have an algorithm for the computation of the $\mathfrak{d}$-uniform Newton polytope of $f$. We will refer to this algorithm in the sequel.

Let us consider a conical series $f$ in $x_1,\ldots,x_n$ which admits an $\mathcal{A}$-representation $\check{f}$ with respect to a fixed local community $\mathcal{A}$. Given $\delta_1,\ldots,\delta_n\in\mathbb{N}$, it is natural to ask for an $\mathcal{A}$-representation of the coefficient $[x_1^{\delta_1}\cdots x_n^{\delta_n}]\,f$ of $x_1^{\delta_1}\cdots x_n^{\delta_n}$ in $f$. Unfortunately, such an $\mathcal{A}$-representation does not always exist. For instance, assume that $f$ admits an $\mathcal{A}$-representation of the form
$$\check{f}=\sum_{i,j\in\mathbb{N}}\check{f}_{i,j}\,z_1^{i}\,z_2^{j},$$
where $\sigma(z_1)=x_1$ and $\sigma(z_2)=x_2/x_1$. Then the corresponding coefficient of $f$ admits a Cartesian representation given by the diagonal
$$\check{g}=\sum_{i\in\mathbb{N}}\check{f}_{i,i}\,z^{i}.$$
However, there is no reason for this diagonal to be in $\mathcal{A}$. A local community $\mathcal{A}$ is said to be a diagonal community if it satisfies:

- For any $\delta=(\delta_1,\ldots,\delta_n)\in\mathbb{Z}^n$, the set $\mathcal{A}_n$ is closed under taking diagonals
$$\Delta_{\delta}\,\check{f}=\sum_{\alpha\cdot\delta=0}\check{f}_{\alpha}\,z_1^{\alpha_1}\cdots z_n^{\alpha_n}.$$

The ring of conical series with $\mathcal{A}$-representations in a diagonal community is closed under the extraction of coefficients with respect to $x_1,\ldots,x_n$. Many classical local communities, such as the communities of algebraic or convergent power series, are actually diagonal. However, local communities are not always diagonal, and even if they are, then proving this fact may be nontrivial.

Fortunately, for our purpose of uniformization, we can do with a suitable approximate version of the extraction of coefficients with respect to $x_1,\ldots,x_n$: given $\delta_1,\ldots,\delta_n\in\mathbb{N}$, we say that $f^{\delta}$ is a pseudo-coefficient of $x_1^{\delta_1}\cdots x_n^{\delta_n}$ in $f$ if the set of dominant monomials of $f-f^{\delta}\,x_1^{\delta_1}\cdots x_n^{\delta_n}$ contains no monomial of the form $x^{\alpha}$ with $\alpha_1=\delta_1,\ldots,\alpha_n=\delta_n$. In earlier work, we have given an algorithm for the computation of such a pseudo-coefficient $f^{\delta}$.

Remark. The main idea behind the computation of $f^{\delta}$ is to hack the algorithm for the computation of a faithful $\mathcal{A}$-representation of $f$ by replacing all zero tests by so-called diagonal tests.
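The diagonal operator $\Delta_\delta$ itself is straightforward on explicit data. A Python sketch under an encoding of our own (a Cartesian series as a dictionary from exponent tuples to coefficients; the difficulty discussed above is precisely that for the series of interest this data is infinite and only accessible through $\mathcal{A}$):

```python
# Diagonal of a (finitely supported) Cartesian series: keep the terms
# whose exponent vector alpha is orthogonal to the given weight delta.
def diagonal(series, delta):
    return {a: c for a, c in series.items()
            if sum(ai * di for ai, di in zip(a, delta)) == 0}

f = {(0, 0): 1, (1, 1): 2, (2, 1): 3, (2, 2): 4}
# with delta = (1, -1), the terms z1^i * z2^i survive
assert diagonal(f, (1, -1)) == {(0, 0): 1, (1, 1): 2, (2, 2): 4}
```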
This idea is based on the observation that, even though we cannot compute >> in (), we can check whether >->=0>: <\equation*> >=>\>(z*z,z*z)=>(z,z). In general, denoting by the genuine coefficient of >*\*x>> in , a similar trick may be used to test whether >*\*x>> . Using such diagonal tests for and truncations of , it then becomes possible to compute the set of dominant monomials> of >*\*x>>. The truncation >=f\>> yields the desired pseudo-coefficient. In this section, we will assume that we have fixed a local community > and that all conical series to be considered admit >-representations. In particular, the set >> will correspond to >>>> instead of >>>. All our algorithms on conical series will only rely on operations that can be performed from within the local community >. Therefore, if > is parametrically effective, then our algorithms also become fully effective. The only >-vector> spaces > over which we will work are strong vector spaces of the form =\[[t,\,t]]>. If ,\,\> is the canonical basis of such a vector space > over [[t,\,t]]>>, then the vectors >*\*t>*\> with ,\,\\\> form a strong basis for >. Given such a basis element > and a vector \>, we denote by >> the coefficient of > in . Similarly, if \>> is a grid-based series, then we denote >=\\>f,\>*\\\>>. Let > be a totally ordered monomial group with >-powers (i.e., for any \\> and \>>, there exists an \\> with =\>). Given a series \[[x]]>>>, the solutions in >> to the equation <\equation> f(x)=0(x\1) can be computed using the Newton polygon method. This method is explained in detail in in the case when is a polynomial in and readily adapts to the case when is a power series (see ). Let us briefly recall the main definitions and results. The series may also be regarded as a series in [[x]]>> and its dominant coefficient =c\\[[x]]> in this representation is called the of . If > is a polynomial, then we rather call it the of . The valuation of > in is called the and we denote it by 1> f>. 
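In concrete terms, the Newton polygon of f = Σ f_i x^i over a valued field is the lower boundary of the points (i, v(f_i)), and its slopes give the candidate valuations of the solutions of f(x) = 0, with the length of each edge giving the number of solutions of that valuation. The sketch below is a minimal model of this classical computation; the function name and the Fraction-based encoding of valuations are ours.

```python
from fractions import Fraction

def newton_slopes(vals):
    """Classical Newton polygon, toy sketch: vals[i] is the valuation
    v(f_i) of the coefficient of x^i (None for a zero coefficient).
    Returns the (negated) slopes of the lower boundary of the points
    (i, v(f_i)), i.e. the candidate valuations of the solutions of
    f(x) = 0, each paired with its multiplicity (the edge length)."""
    pts = [(i, v) for i, v in enumerate(vals) if v is not None]
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the chord hull[-2] -> p
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return [(Fraction(a[1] - b[1], b[0] - a[0]), b[0] - a[0])
            for a, b in zip(hull, hull[1:])]
```

For f = x^2 - 2 t x + t^3, the valuations are (3, 1, 0) and `newton_slopes([3, 1, 0])` returns `[(Fraction(2, 1), 1), (Fraction(1, 1), 1)]`, matching the two solutions of respective valuations 2 and 1.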
If \\>>> is infinitesimal, then the additive and multiplicative conjugates >,f\>\\>[[x]]>> are defined by <\eqnarray*> >(x)>||+x)>>|\>(x)>||*x).>>>> For \\>>, this allows us to define the Newton degree associated to the ``slope'' > by \> f=deg1> f\>>. The following properties are easy or proved in: <\proposition> Given \>[[x]]>>, ,\\\>> with \\>, \\>>> with \\> and deg\> f>, we have <\eqnarray*> \> f*g>||\> f+deg\> g>>|\> f>|>|\> f>>|\> f>>||\> f>>|\> f|\ x>>>||\> f-l>>>> <\proposition> If 1> f=1>, then )> admits a unique solution in >>>. <\proposition> Let \\>,\>>, >>, =\>>, \>>\\[x]> and =val N>. Then <\equation*> deg\> f>=\\deg1> f. Moreover, if =d=deg1> f> and > is the unique solution to the equation <\equation> f|\ x>(\)=0(\\1), then for any non zero |~>\\>>> with dominant monomial |~>>, we have <\equation*> deg|~>> f+|~>>\deg1> f. The theory adapts with minor modifications to the case when \>[[x]]> admits its coefficients in a >-vector space >. In that case, we are still looking for solutions of () in >>. However, in proposition it is only guaranteed that () admits at most one solution. In particular, the equation () cannot necessarily be solved in proposition. Nevertheless, there exists a strong basis element > of > such that the coefficient >\\[x]> of > in \[x]> has Newton degree . Then proposition still holds if we take > to be the solution of f>/\ x)(\)=0>. Indeed, if |~>=\|~>>> and =N,\|~>>>>, then > has the property that ,d-1>=0>. Consequently, >> does not admit a root of multiplicity and neither does >. Let us now return to the case of a multivariate series \>>. We will assume that > is an ordinary coordinate in > and let > be the coordinates in > except >. In particular, =\*x>> and >=\>[[x]]>. Considering +f*x+\>> as a power series in >, we may then evaluate at a point \\> using <\equation*> f(\)=f(\)+f(\)*x+\\|^>[[x]]|^>>. Now the uniform Newton degree of in > is defined by <\equation*> deg\1> f=sup\\>>>|)\0>>>>>>deg1> f(\)\\\{\\}. 
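The multiplicative conjugate makes the Newton degree along an arbitrary slope effective: conjugating by a monomial of valuation s shifts the valuation of the coefficient f_i by i·s, after which the Newton degree can be read off as in the case of slope 1 (it is the largest index attaining the minimal shifted valuation). The following is a minimal sketch of our own, under the simplifying assumption that each coefficient has the form c·t^v for a single monomial t.

```python
from fractions import Fraction

def newton_degree(vals, s):
    """Toy sketch of the Newton degree along the slope mu = t^s (s > 0):
    vals[i] is the valuation v(f_i) of the coefficient of x^i (None for
    a zero coefficient).  Multiplicative conjugation by mu shifts v(f_i)
    to v(f_i) + i*s; the Newton degree is the largest i minimising this
    shifted valuation, i.e. the number of solutions x with v(x) >= s."""
    shifted = [(v + i * s, i) for i, v in enumerate(vals) if v is not None]
    m = min(w for w, _ in shifted)
    return max(i for w, i in shifted if w == m)
```

On f = x^2 - 2 t x + t^3 (valuations (3, 1, 0), solutions of valuations 1 and 2), the degree is 2 along the slope s = 1, drops to 1 for 1 < s ≤ 2, and to 0 beyond, in accordance with the propositions above.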
This definition extends to the case when the coordinate > is just ordinary in : it suffices to replace > by > and an infinitesimal coordinate > which satisfies no constraints. The non trivial fact that \1> f> is actually finite will follow later from the possibility to uniformize . Nevertheless, the finiteness is trivially guaranteed in one particular case: <\proposition> Let =\[[x]]> and assume that is uniform as a series in >>. Denoting by \>\\>[[x]]> the dominant coefficient of , we have <\equation*> deg\1> f=val> N. In fact, )>=N(\)> and 1> f(\)=val> N(\)> for all \\>> with )\0>. The properties stated in proposition admit natural uniform analogues: <\proposition> Given \>>>, ,\\\> with \\>, \\>> with \\> and deg\\> f>, we have <\eqnarray*> \\> f*g>||\\> f+deg\\> g>>|\\> f>|>|\\> f>>|\\> f+\>>||\\> f>>|\\> f|\ x>>>||\\> f-l,>>>> where +\>(x,\,x)=f(x,\,x,x+\,x,\,x)>. For the remaining propositions and, it is useful to define an additional concept: we say that \>> is if there exist two monomials ,\\\>, such that admits a >-uniform Newton polytope >, which is contained in /\)>*\>. If > contains at least two elements, then there exist unique \>, \x>*\*x>> and \\> with \(x/\)>*\>. We will call > the for and > its associated . If, moreover, we have \(x/\)>*\>, then we say that is -Newton prepared, with associated Newton polynomial f/\)*\>*x\\>[x]>. Notice that proposition applies as soon as \1>. <\proposition> Assume that \>> is -Newton prepared, with principal coordinate> and \1> f=1>. Then )=0> admits a unique infinitesimal solution in >>. <\proof> This is a direct application of . Proposition is not good enough though, since the unique solution > may depend on ,\,x>. Consequently, we will have to replace > by the coefficient x>=[x*\*x] \> of *\*x> in >. Unless > is a diagonal community, we have no means to compute this coefficient. In practice, we therefore rather compute a pseudo-coefficient *\*x> in >, which we will denote by x>>>. 
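The proposition above says that, for a series which is uniform over its coefficient ring, the uniform Newton degree is simply the valuation in the distinguished variable of the dominant coefficient N. In a toy encoding of our own, where a series in C[[t]][[x]] (a single monomial t, for simplicity) is stored as a dict mapping (i, j) to the coefficient of t^i x^j, this reads:

```python
def uniform_newton_degree(coeffs):
    """Toy illustration of the identity 'uniform Newton degree = valuation
    of the dominant coefficient' for a series that is uniform in t:
    coeffs maps (i, j) to the coefficient of t^i x^j.  The dominant
    coefficient N(x) is the coefficient of the least occurring power of t;
    the uniform Newton degree is the valuation of N in x."""
    i0 = min(i for (i, j), c in coeffs.items() if c)
    return min(j for (i, j), c in coeffs.items() if c and i == i0)
```

For instance, for f = 5 t^2 + t x^2 + t x^3 the dominant coefficient is N = x^2 + x^3, of valuation 2, and indeed the specialised Newton polygon of f at any point with v(t) > 0 has two slopes with positive valuation.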
<\proposition> Assume that \>> is -Newton prepared, with slope \1> and Newton polynomial . Let \\>> be uniform with dominant term >. Then <\equation*> deg\\> f+\>=\=max|^>\\>>val N(|^>). Moreover, in the case when =d=deg\1> f>, let > be a strong basis element for > such that >\0>, and assume that =\x>>>, where > satisfies <\equation*> f>|\ x>(\)=0(\\1). Assume that +\>> is -Newton prepared, with slope |~>\\>, and let |~>\\>> be uniform, with dominant monomial |~>>. If +\+|~>>> is -Newton prepared, then <\equation*> deg\|~>> f+\+|~>>\deg\1> f. <\proof> The first assertion follows directly from the first assertion of proposition. As to the second assertion, let > be the monomial, such that the dominant monomials of>> are /\)*\> with ,d>. Then the dominant monomials of ,x+\>> are /|~>)*\*(|~>/\)> with ,d>. Writing =\+\>, the definition of pseudo-coefficients states that each dominant monomial of > depends on at least one of the coordinates ,\,x>. Now <\eqnarray*> ] f,x+\>>|| f>|\ x>(\+\)>>||| f>|\ x>(\)*\+\.>>>> Since <\equation*> f>|\ x>(\)\\/\, it follows that the unique dominant monomial *|~>/\> of ] f,x+\>> is a dominant monomial of /\)*\>. In other words, |~>> must be a dominant monomial of>, even though|~>> is free from ,\,x>. This contradiction completes the proof. A first useful subalgorithm which we will need is . Given an arbitrary monomial \x>*\*x>>>, we will frequently have to decide whether \1>, \1> or \1>. In general, it may happen that none of these conditions are satisfied globally on >>. In such cases, we will use polarization in order to decompose >> into at most three subregions, such that on each subregion we have either \1>, \1> or \1>. Now the subregions where \1> and \1> simply correspond to the imposition of the corresponding asymptotic constraints. In the remaining case, we write =(x/\)>> with \\> and \x>*\*x>>. For a new invertible parameter >, and modulo a suitable ramification, it then suffices to perform the refinement \\*(\+)> with \1>. 
<\algorithm|polarize> )> a monomial =x>*\*x>> with ,\,\\\> and \0> refine the coordinates such that we either get \1>, \1> or \1> <\body> Ramify the coordinates, such that ,\,\,\/\,\,\/\\\> If neither \1>, =1>, nor \1>, then separate the following cases: <\enumerate> Impose the constraint \1> Introduce an invertible parameter >. Refine =x\/\>*\*x\/\>*(\+)> with \1>. Impose the constraint \1> In section, we have seen the usefulness of Newton prepared series. Assuming that we have an algorithm > for the uniformization of series in at most variables, we will now present an algorithm for the Newton preparation of a series. Our algorithm assumes that we are given an ordinary coordinate > and will only use refinements in the remaining coordinates >. Hence, the refinements never involve >, although they may involve ,\,x>. If does not become uniform after the preparation, then it should be noticed that the principal coordinate > of may satisfy k>. <\algorithm|prepare> >(f,x)> a non zero series \>> and an ordinary coordinate > for refine the coordinates other than > such that becomes Newton prepared <\body> Let > be the coordinates other than > and =\[[x]]>, so that \>> >(f)> Let be the dominant monomial of \>> >(c)> and let be the valuation in > of Expand +f*x+\> as a series in > For {0,\,d-1}> do >(f)> Let *x>,\,\*x>}\>(f)>, with \\\\=d> and ,\,\\\> For all j\r> do ((\/\)-\)>/(\/\)-\)>)> The algorithm for performing the Newton preparation is quite straightforward. We first uniformize as a series in >> and let deg\1> f>. Writing +f*x+\> as a series in >, the tail *x+f*x+\> will then be uniform. After uniformizing the remaining coefficients ,\,f>, it follows that the Newton polytope of has the form ={\*x>,\,\*x>}>, with \\\\=d> and ,\,\\\>. Now a sufficient condition for to be Newton prepared is that all the slopes /\)-\)>> are pairwise comparable for >>. This is forced using polarization, where we notice that only refinements which do not involve > are needed. 
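The effect of polarization can be modelled concretely: as far as the comparison of a monomial μ = x^α with 1 is concerned, a point of the region is determined by its vector of positive valuations v = (v(x_1), ..., v(x_n)), and the three cases μ ≺ 1, μ ≍ 1 and μ ≻ 1 correspond to the sign of the inner product ⟨α, v⟩. The following toy classifier is ours and deliberately ignores the ramification and refinement steps of the actual algorithm; it merely separates sample points into the three subregions.

```python
def polarize_cases(alpha, sample_vals):
    """Toy model of the three-way case separation behind polarization:
    each sample point of the region is represented by its vector of
    (positive, rational) valuations v = (v(x_1), ..., v(x_n)); the
    monomial mu = x^alpha satisfies mu < 1, mu ~ 1 or mu > 1 according
    as <alpha, v> is positive, zero or negative.  Returns the three
    sub-samples in that order."""
    small, unit, large = [], [], []
    for v in sample_vals:
        s = sum(a * vi for a, vi in zip(alpha, v))
        (small if s > 0 else unit if s == 0 else large).append(v)
    return small, unit, large
```

For μ = x1/x2, i.e. α = (1, -1), the point with valuations (2, 1) falls in the region μ ≺ 1, the point (1, 1) in μ ≍ 1, and the point (1, 2) in μ ≻ 1.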
Assume now that is Newton prepared, but not uniform, and let > be its principal coordinate with slope >. At this point, we might in principle proceed by polarizing /\>. However, in order to force a strict decrease of the Newton degree (using proposition), we have to slightly modify the procedure for polarization and introduce a new case for handling the situation when the Newton polynomial has non zero roots of maximal multiplicity. In this new case, which will only be needed if > is an ordinary coordinate in (see also remark below), we apply a Tschirnhausen transformation. <\algorithm|blowup> >(f)> a non uniform Newton prepared series \>> refine the coordinates such that becomes ``more uniform'' <\body> Let \>(f)> Let >, \x>*\*x>> and \\> be such that \(x/\)>*\> (\)> and return whenever \1> Ramify the coordinates, such that \\> Let \\>f/\)>*\>*x>\\>[x>]> and let \> be largest with \0> If > is ordinary in then separate the following cases <\enumerate> Impose the constraint \\> Impose the constraint \\> Introduce an invertible parameter > with N*(x-\)> Refine \\*(\+)> with \1> Introduce an invertible parameter > with *(x-\)> Let > be a strong basis element for > such that >\0> Let > be the coordinates in > except > Let \\>> be the unique solution to f>/\ x)(\)=0> Refine \(\,k)+\*|~>> with \1> Else (x/\)> In the last step of 4, the subalgorithm (\,k)> computes a pseudo-coefficient >x>> of *\*x> in >. In addition, modulo additional refinements and recursive calls of >, we enforce the difference -\x>>> to be uniform. This will ensure that the x>>> keeps its status of being a pseudo-coefficient, even after subsequent refinements of ,\,x>. In the exceptional case that one of the coordinates ,\,x> is refined during the uniformization of -\x>>>, we do not require -\>x>> to be uniform on exit. Let us finally notice that, in the case when > is an effective diagonal community, we may simply take (\,k)\\x>> to be the genuine coefficient of *\*x> in >. 
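The Tschirnhausen transformation invoked above is the classical shift that kills the subdominant coefficient of the Newton polynomial. Over the rationals, and with dense coefficient lists (an encoding of our own, not the paper's), it can be sketched as follows.

```python
from fractions import Fraction
from math import comb

def tschirnhausen(coeffs):
    """Toy illustration of the Tschirnhausen transformation: for
    N = sum_i coeffs[i] x^i of degree d, shift x -> x + c with
    c = -coeffs[d-1] / (d * coeffs[d]), which kills the coefficient
    of x^(d-1).  Returns (c, coefficients of the shifted polynomial)."""
    d = len(coeffs) - 1
    c = Fraction(-coeffs[-2], d * coeffs[-1])
    shifted = [Fraction(0)] * (d + 1)
    for i, a in enumerate(coeffs):
        for j in range(i + 1):  # expand a * (x + c)^i by the binomial formula
            shifted[j] += a * comb(i, j) * c ** (i - j)
    return c, shifted
```

For N = x^2 + 2x + 2 the shift is c = -1 and the transformed polynomial is x^2 + 1, whose x^1 coefficient vanishes as intended. In the algorithm, this transformation separates a root of maximal multiplicity from the origin so that a subsequent refinement strictly decreases the Newton degree.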
<\algorithm|pseudo> a series \>>, where > are the coordinates in > except > a pseudo-coefficient x>>> of *\*x> in . Moreover, if no refinement on ,\,x> occurs during the execution, then x>>> is guaranteed to be uniform on exit. <\body> Repeat <\indent> Let > be a pseudo-coefficient of *\*x> in . If > is regular or a refinement on ,\,x> has occurred, then return > >>(f-\)> We are now in a position to state the main algorithm for the uniformization of . As long as is not uniform, we keep Newton preparing and blowing up the resulting edge until becomes uniform. If no ordinary coordinate exists, then we perform a sequence of suitable polarizations which will either make uniform or induce a refinement which makes one of the coordinates ordinary. <\algorithm|uniformize> >(f)> a series \>> refine the coordinates such that becomes uniform <\body> While is not uniform do <\indent> If there exists an ordinary coordinate > in then take maximal and <\indent> >(f,x)> If is not uniform, then >(f)> Else <\indent> \>(f)> Let be minimal such that \\*x>*\*x>\|\2> for some ,\,\> For all ,\,\,\\\> with /\=x>*\*x>> and \0> do <\indent> ((x>*\*x>)>/(x>*\*x>)>)> Pick ,\\\> with /\=x>*\*x>> and \0> and (x>*\*x>)>> <\remark> We do not know of any effective test whether a given coordinate > is ordinary in . For this reason, we will use a slightly weaker test. For every series, we maintain a set > of coordinates which are ensured to be ordinary for , and simply check whether > is in this set. Whenever > is refined, we add > to >. The coordinate > is removed from >, whenever an-refinement \\*(|~>+)> or \\+\*>> occurs, where k> and > or > is not ordinary in >. Our slightly weaker test is still sufficient for making the termination proof below work. <\theorem> The algorithm > is correct and terminates. <\proof> The algorithm is clearly correct. Let us first prove the termination modulo the termination of the subalgorithm >. 
Assume for contradiction that it does not terminate on a given input , but that each of its recursive invocations for lower dimensional series terminates. We observe that the coordinates cannot all remain non ordinary indefinitely. Indeed, the polarizations in the second part either strictly reduce the number of elements in > or end up refining one of the coordinates, thereby making it ordinary in . Let be maximal such that the coordinate > is refined infinitely many times. Modulo picking up the computation at a later point, we may assume without loss of generality that the coordinates ,\,x> are never refined (although new asymptotic constraints may be imposed upon them). After one refinement of >, the coordinate > will then remain ordinary in . Consider the series expansion +f*x+\> of in > after a call of >(f,x)>>. Then each of the coefficients > for which \x>*\*z>*x*x>*\*x>\\> is uniform. In particular, if is not uniform, then > contains at least two elements whose exponents in> differ. Consequently, the principal coordinate > for satisfies k>. We claim that we cannot have k>. Otherwise, >(f)> necessarily falls in a case which results in the uniformization of, since the coordinate > is never refined. After return, the main algorithm then terminates. Consequently, we are infinitely often in the situation that >(f)> is called with > as the principal coordinate of . By proposition, the Newton degree \1> f> decreases by at least one for every two such calls. Since \1> f\\>, this cannot happen infinitely many times. This contradiction proves the termination of > modulo the termination of >. To complete the proof, let us consider one of the recursive calls (f,k)> and prove its termination. Without loss of generality, we may assume that none of the coordinates ,\,x> is refined during the recursive calls of >(f-\)>. Intuitively speaking, the termination of (f,k)> is equivalent to the termination of >(f-fx>)>, interspersed with a sequence of additional refinements. 
More precisely, assume that the loop does not terminate. In a similar way as above, let \k> be maximal such that >> is refined infinitely many times. Without loss of generality, we may assume that >> remains ordinary in x>. At each recursive call of >(f-\)>, each of the dominant monomials of > involves one of the coordinates ,\,x>. At the first subsequent call of >>(f-\)>, it follows that the dominant edge of > is actually a dominant edge of x>>. Now in a similar way as above, it occurs infinitely often that >> is the principal coordinate of > (and thus of x>>) for this call. But then >\1> (f-fx>)> decreases every two such calls, which leads to the desired contradiction. We are grateful to Jean-Philippe Rolin for his careful reading of successive versions of this manuscript and many useful discussions on the subject. <\bibliography|bib|alpha|all.bib> <\bib-list|FKP06> Gabor Bodnar. . PhD thesis, University of Linz, Austria, 2001. RISC Report Series 01-04. A. Frühbis-Krüger and G. Pfister. Algorithmic resolution of singularities. In , volume 324 of , pages 157--184. Springer, 2006. H. Hironaka. Resolution of singularities of an algebraic variety over a field of characteristic zero. , 79:109--326, 1964. J. van der Hoeven. . PhD thesis, École polytechnique, Palaiseau, France, 1997. J. van der Hoeven. , volume 1888 of . Springer-Verlag, 2006. O. Villamayor. Constructiveness of Hironaka's resolution. , 22:1--32, 1989. O. Zariski. Local uniformization theorem on algebraic varieties. , 41:852--896, 1940. 