<\body> <\author-address> Département de Mathématiques (Bât. 425), Université Paris-Sud, 91405 Orsay Cedex, France <\abstract> In this survey paper, we outline the proof of a recent differential intermediate value theorem for transseries. Transseries are a generalization of power series with real coefficients, in which one allows the recursive appearance of exponentials and logarithms. Denoting by $\mathbb{T}$ the field of transseries, the intermediate value theorem states that for any differential polynomial $P$ with coefficients in $\mathbb{T}$ and $f<g$ in $\mathbb{T}$ with $P(f)\,P(g)<0$, there exists a solution $h\in\mathbb{T}$ to $P(h)=0$ with $f<h<g$. In this survey paper, we will outline the proof of a recent differential intermediate value theorem for transseries (with a few corrections in ). A transseries is a generalization of a formal power series, in which one allows the recursive intrusion of exponentials and logarithms. In this paper we will only deal with real transseries at infinity ($x \to \infty$). Some examples of such transseries are <\eqnarray*> >||+x+\+e+x*e+\+e+\>>|>||+x*e+\>+x*e+x*e+\>+x*e+x*e+\>+\>>|>||+2*log x*e+6*log x*e+24*log x*e+\>>|>||+3+4+\>>|>||+x+x+\+e x>+e x>+\+e x>+\>>>> The transseries $f_{1}$, $f_{2}$ and $f_{3}$ are examples of transseries, the first two being convergent and the third one divergent. In section 2 we will construct the field of grid-based transseries $\mathbb{T}$, which will be our main object of study in the sequel. More exotic transseries are $f_{4}=\zeta(x)$ and $f_{5}$. Notice that $f_{5}$ satisfies the functional equation <\equation*> f(x)=+f(x)+f(e x>). Historically speaking, transseries appeared independently in at least three different contexts: <\description> The first construction of the field of transseries goes back to Dahn and Göring, who were interested in non-standard models of the theory of real numbers with exponentiation. Much recent progress on this subject has been made through the works of Wilkie, van den Dries, Macintyre and others. The theory of transseries also bears many similarities with Conway's theory of surreal numbers. 
The main current application of transseries is in the proof of Dulac's conjecture by Écalle . More precisely, Écalle proved that a planar real analytic vector field can only have a finite number of limit cycles. Essentially, he shows that the Poincaré return map near a limit cycle can be represented by an >. Formally, such a function is a transseries, but through a complicated process of accelero-summation, this formal transseries can be given an analytic meaning. Since the transseries form a totally ordered field, one must have (x)=x>, (x)\x> or (x)\x> in a small neighbourhood of the limit cycle. In other words, either all or no orbits are periodic in this neighbourhood. Transseries also implicitly appeared during the research of algorithms for doing asymptotic analysis . In the formal context of transseries, we were able to do such effective computations in a more systematic way . There is no doubt that the combination of the techniques from these three different areas will lead to an extremely powerful theory, whose development is far from finished. A nice feature of such a theory will be that it will both have theoretical and practical aspects (we expect effective numerical accelero-summation to have many applications in numerical analysis, for instance). Before dealing with all these aspects of the theory of transseries, it is interesting to study which kind of asymptotic problems might be adequately modelled by transseries (at least from the formal point of view). For instance, it is well known that linear differential equations with power series coefficients in [[z]]> always have a full basis of solutions of the form <\equation*> f=\()*z>*e)>, where is a polynomial, \\> and ()\\[log z][[]]> a power series whose coefficients are polynomials in of uniformly bounded degree. \ It is tempting to generalize this result to non-linear differential equations and even to more general functional equations. 
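A minimal instance of this normal form (our own toy example, with $k=1$, not taken from the literature cited above) is the following first-order equation with an irregular singularity at the origin:

```latex
% The equation
z^{2}\,f' \;=\; f ,
% with coefficients in C[[z]], has the solution
f \;=\; e^{-1/z} \;=\; \varphi\bigl(z^{1/k}\bigr)\, z^{\alpha}\, e^{P(z^{-1/k})},
% which is of the stated normal form with
k = 1, \qquad \varphi = 1, \qquad \alpha = 0, \qquad P(u) = -u .
```

One checks directly that $f = e^{-1/z}$ satisfies $f' = z^{-2} f$, so the exponential of a polynomial in $z^{-1/k}$ is genuinely needed even in this simplest case.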
When considering non-linear equations, say algebraic ones, the first difficulty one has to face is that the set of solutions to such equations is closed under composition. For instance, given an infinitely large indeterminate $x$, we have to incorporate iterated exponentials $e^{x}, e^{e^{x}}, e^{e^{e^{x}}}, \ldots$ of arbitrarily high orders into our theory. This is problematic if $x$ is complex, because $e^{x}$ behaves very differently in the positive and negative half-planes. In order to do asymptotics, a reasonable next step is therefore to restrict one's attention to real functions without oscillatory behaviour. Of course, this is a very strong restriction, since we will no longer be able to solve simple equations like <\equation> f^{2}+1=0. Nevertheless, this restriction does allow us to construct a clean, totally ordered field of formal grid-based transseries $\mathbb{T}$ in an infinitely large real variable $x$ (see section 2). In this field, we have asymptotic relations $f \prec g$ and $f \preccurlyeq g$ (using the notations of Hardy: $f \prec g \Leftrightarrow f=o(g)$ and $f \preccurlyeq g \Leftrightarrow f=O(g)$). Furthermore, $\mathbb{T}$ is closed under differentiation, composition and functional inversion. So what about solving differential equations? Since even simple equations such as () do not always have solutions, we have to search for existence theorems for solutions which take into account the realness of the context. In this paper, we outline a proof of the following such theorem: <\theorem> Let $P$ be an algebraic differential polynomial with coefficients in $\mathbb{T}$. Given $f<g$ in $\mathbb{T}$, such that $P(f)\,P(g)<0$, there exists a solution $h\in\mathbb{T}$ of $P(h)=0$ with $f<h<g$. In particular, the theorem implies that an algebraic differential equation of odd degree, like <\equation*> P(f)=f+\(\(x))*(f*f*+f)+e+e x>>*f-log log x=0 always has a solution in $\mathbb{T}$. Our proof is based on a differential version of the Newton polygon method, which will be sketched in section 3. Using a variant of Stevin's dichotomic method for finding roots of continuous functions, we next indicate (section 4) how to find a solution $h$ with $f<h<g$. For full proofs, we refer to . 
In section , we will finally discuss some perspectives for the resolution of more general algebraic differential equations using complex transseries. Let be a constant field and ,>>)> a totally ordered, multiplicative, commutative group of . For instance, the monomial group =x>> of real powers of an infinitely large is a monomial group with >\x>\\\\>. The relation >>> corresponds to Landau's -symbol and >>> to the -symbol. We write \\> if \\> and \\>. A is a mapping \C> with well-ordered support. We usually write >=f(\)> and \\>f>*\> and the support of is the set of \\> with >\0>. The condition on the support of means that there does not exist an infinite sequence \\\\> of monomials with >\0> for all . We write ]]> for the set of generalized (or well-ordered) power series. It is known since long that the well-ordering condition allows us to define a natural multiplication on ]]> by <\equation*> f*g=\\>\\>f>*g/\>*\. This multiplication actually gives ]]> the structure of a field. Examples of well-ordered series in [[x>]]> are <\eqnarray*> >||+x+1+x+\>>|>||+x+\>>|>||+x+\+x+x+x+\+\>>>> Notice that the order type of > is transfinite (namely >) and that =g/(x-1)>, where <\equation*> g=x++>+\ satisfies the functional equation )>. The more exotic types of well-ordered series like > and > usually do not occur when studying differential equations. We say that a subset > of > is if there exists a monomial \\> and a finite number of infinitesimal monomials \1,\,\\1> in >, such that <\equation*> \\\>*\*\>*\. In other words, for each \\>, there exist ,\,i\\> with =\>*\*\>*\>. We say that a series is if its support is grid-based. It can be shown that the set >> of grid-based series forms a subfield of ]]>. Notice that the support of a grid-based series can still be transfinite when > is a non-archimedian monomial group. Indeed, the order type of -e)\\>*e*x>>> is again >, where >*e*x>> is ordered by >*e*x>\1\\\0\(\=0\\\0)>. The fields >> ( ]]>) can be given further structure. 
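The multiplication formula above can be checked on a small example (our own, in the monomial group $x^{\mathbb R}$):

```latex
% For f = g = 1 + x^{-1} + x^{-2} + ... , the product formula gives,
% for each monomial x^{-n},
(f\,g)_{x^{-n}} \;=\; \sum_{k=0}^{n} f_{x^{-k}}\, g_{x^{-(n-k)}} \;=\; n+1 ,
% so that
f\,g \;=\; 1 + 2\,x^{-1} + 3\,x^{-2} + 4\,x^{-3} + \cdots
     \;=\; \frac{1}{(1-x^{-1})^{2}} .
% The well-ordering (or grid-based) condition on the supports is
% precisely what makes each of the inner sums finite.
```

Note that the same computation would fail for arbitrary supports: without the well-ordering condition, the inner sum over all factorizations of a monomial need not be finite.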
Given C>\\{0}>, we define its > to be the >-maximal element in . The dominant coefficient > of is the corresponding coefficient =f>>. We extend > to >> by g\\\\> (here we assume 0>; we take g> for all and 0> whenever 0>). Each series C>> also admits a into a , a , and an : <\equation*> ||>>||>||>>>|||>||>||>>|||\1>f>*\>||>||\1>f>*\>>>>> Finally, if is a totally ordered field, like >, then we define a total ordering on >> by 0\(f\0\c\0)>. Another important operation on >> ( ]]>) is . A family )I>\C>> is said to be grid-based (or strongly summable), if <\description> I>supp f> is a grid-based subset of >. For each \\>, the set I\|\\supp f}> is finite. Given such a family, we may define its sum I>f> by <\equation*> I>f=\\>(I>f>)*\. Given a classical power series +a*z+a*z+\\C[[z]]> and an infinitesimal \C>>, it can for instance be shown that *\)\>> is a grid-based family. Consequently, \=a(\)=a+a*\+a*\+\> is well-defined. > We now want to define a field of grid-based series =\>\\>>> with additional functions and , such that is defined for all and for all 0>. Classically , one starts with the field >>> and successively closes it under exponentiation and logarithm. We will follow the inverse strategy: we start with the field of logarithmic transseries > and close it under exponentiation. Both constructions turn out to be equivalent in the grid-based setting. Consider the monomial group =\={x>*(log x)>*\*(log x)>\|\,\,\\\},> where > stands for the iterated times. The infinitesimal monomials of > are of the form <\equation*> (log x)>*\*(log x)>, with l>, \0> and \0>. Elements of =C>> are called . We now have to define a logarithm on >={f\\\|f\0}>. Given \>>, we may write <\equation*> f=c*x>*\*(log x)>*(1+\), where \>>, ,\,\\\> and \1>. Then we define <\equation*> log f=log c+\*log x+\+\*log x+log (1+\). Here we remind that )=\-*\+*\+\> is well defined as the sum of a grid-based family. We next have to show how new exponentials can be added to >. 
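The logarithm just defined can be evaluated on a small example (our own):

```latex
% For f = 2 x^{3} (\log x)^{2} (1 + x^{-1}), so that c = 2, we get
\log f \;=\; \log 2 \;+\; 3\,\log x \;+\; 2\,\log\log x
        \;+\; \log\,(1 + x^{-1}) ,
% where the last term is the grid-based sum
\log\,(1+x^{-1}) \;=\; x^{-1} - \tfrac{1}{2}\,x^{-2}
        + \tfrac{1}{3}\,x^{-3} - \cdots .
```

The first three terms come from the monomial part $x^{3}(\log x)^{2}$ and the dominant coefficient $2$; only the infinitesimal part $\varepsilon = x^{-1}$ requires an infinite (but grid-based) summation.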
The main point here is to decide which exponentials give rise to new monomials. In >, we observe that \>> is a monomial if and only if \>> (in which case we say that is ). We will use this observation as a criterion for extensions by exponentials. So assume that we have constructed the field =\>> of transseries of n>>>. The case has been dealt with above. Then we take =exp \>.> In other words, each monomial in > is the formal exponential of an element in >>. The asymptotic ordering on > is inherited from the usual ordering on >>: \e\f\g,> for all \>>. Finally, the exponential of an arbitrary element \> exists in > and is given by <\equation*> exp f=(exp f>)*(exp f)*(exp f>). The logarithm remains totally defined as an inverse function of on >>, which guarantees that \\> at each step. <\example> (1++>+\)>\\>, but >\\>. > The field <\equation*> \=\\\\\\\\\\\>=\> is called the field of in . The exponentiation and logarithm are totally defined on > >>. It can be shown that the usual axioms and for exponentiation are satisfied in >. The exponentiation also satisfies the following more interesting properties <\equation*> \n\\exp f\|i!> in relation to the ordering. These properties imply in particular that is increasing and that f\f> for all \,\>=\>\\>={f\\\|f\0\f\1}>. We will now present some techniques for doing effective computations with transseries. Although these techniques will not essentially be used in the sequel, they will allow the reader to get a better understanding of the nested structure of transseries and the transfinite and non-archimedian nature of their supports. The transmonomial group > can actually be seen as an ordered >-vector space, where addition is replaced by multiplication and scalar multiplication by raising to real powers. As a consequence, we also have and -like relations on this vector space: we say that \\> is than \\>, and write \\>, if and only if \>\<\|\|\>\\<\|\|\>\\<\|\|\>> for all \\>. 
Here \\<\|\|\>=\> if \1> and \\<\|\|\>=\> otherwise. The flatness relation can be extended to >> itself: given \>>, we define <\eqnarray*> g>|>|f\<\|\|\>\log \<\|\|\>g\<\|\|\>;>>|g>|>|f\<\|\|\>\log \<\|\|\>g\<\|\|\>;>>|g>|>|f\<\|\|\>\log \<\|\|\>g\<\|\|\>.>>>> Here we understand that f\<\|\|\>=\|f\|> if 1> and f\<\|\|\>=\|f\|> otherwise. For instance, \e>, but x>. Also, e x>\e\x\e>>. An is a tuple =(\,\,\)>, with ,\,\\\,\>> and \\\\>. Such a basis generates an >=\>*\*\>>. The field of grid-based series ;\;\>=\>>> is naturally included in > and elements of ;\;\>> may be expanded recursively ,\,\>: ||>f>*\>;>>|>>||>f,\>*\>;>>||>|>|,\,\>>||>f,\,\>*\>.>>>>> Conversely, for any transseries in > there exists an asymptotic basis ,\,\)> with \;\;\>>. In fact, we may always choose ,\,\> to be transmonomials, although this is not always the most convenient choice. For instance, from a computational point, it is more efficient to see <\equation*> /(x-1)>-e>> as an element of /(x-1)>;e>>> rather than +x+\+x>;e>>>. In the grid-based context, the fact that > is archimedian implies that the types of the supports of the infinite sums in recursive expansions are at most >. Consequently, the type of the support of a series in \;\;\>> is at most >. For instance, the type of the support of <\eqnarray*> /(1-x-x-e)>||+x+x+x+x+\)*e+>>|||+x+x+x+x+\)*+>>|||+x+x+x+x+\)*e+>>|||>>>> is >. In fact, this transseries may also be interpreted as a multivariate Laurent series /(1-z-z-z)> in which we substituted \x>, \x>, \e>. We call this a of the transseries. Cartesian representations exist in general and they are best suited for doing effective computations on transseries . For computations which involve exponentiation and differentiation, asymptotic bases do not necessarily carry sufficient structure. A is a tuple =(\,\,\)> with <\description> \\\\\>. =exp x>, with \>. \exp \;\;\>> for 1>. A transbasis is necessarily an asymptotic basis and any transseries may be expanded a suitable transbasis. 
In fact, the following incomplete transbasis theorem holds. > be a transbasis and \> a transseries. Then there exists a supertransbasis \\>, such that \>>.>> <\example> +x>)> is a transbasis and |1-x>>)\\+x>>>. The tuple >>,e>)> is not a transbasis. Differentiation may be defined inductively on all > as follows. We start by defining the differentiation on the monomial group >. If , then we set =\*|x>+\+|x*\*log x>,> for each monomial =x>*\(log x)>\\>. If 0>, then each monomial in > has the form =e> for some \>> and we define =f*\,> where \\> has already been defined by the induction hypothesis. Having defined the derivative of each monomial in >, we ``extend the derivation to > by strong linearity'': <\equation*> \\>f>*\=\\>f>*\. In order for this to work, we have to check that for each grid-based subset > of >, the set \\>supp \> is again grid-based. Now if ,\,\> and > are such that \\>*\*\>*\>, then <\equation*> \\>supp \\(supp \>\\\supp \>\supp \>)*\. Here >=f/f> denotes the logarithmic derivative of . For the definition of general composition, we refer to . In what follows we will only use right compositions with and , which are defined in a straightforward way using the systematic substitution of for in a transseries (in particular, transmonomials map to transmonomials). In the sequel, we will denote =f\exp> for the of a transseries \> and =f\log> for its . In this section, we will show how to generalize the Newton polygon method in order to solve like P(f)=0(f\\).> Here \[f,f,\,f]> is a differential polynomial with transseries coefficients and \\> a transmonomial. We also allow > to be a formal monomial with \\> ( \\> for all \\>) in order to cover the case of usual algebraic equations. The fact that we consider differential equations ( with the asymptotic side condition \>), enables us to associate invariants to the equation (), which prove to be very useful when applying the Newton polygon method. 
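The interaction between right composition with $\exp$ and differentiation reduces to the chain rule. As a routine computation (our own spelling-out, using the polynomial $P(f)=f\,f''-(f')^{2}$, which reappears below):

```latex
% With f\uparrow = f o exp, the chain rule yields
(f')\!\uparrow \;=\; e^{-x}\,(f\uparrow)' , \qquad
(f'')\!\uparrow \;=\; e^{-2x}\,\bigl((f\uparrow)'' - (f\uparrow)'\bigr) ,
% so that, for P(f) = f f'' - (f')^2 and g = f\uparrow,
\bigl(P(f)\bigr)\!\uparrow \;=\;
  \bigl(g\,g'' - g\,g' - (g')^{2}\bigr)\, e^{-2x} .
```

This is why each upward shifting multiplies the homogeneous parts of a differential polynomial by powers of $e^{-x}$, as in the computations of $P\!\uparrow$ in the next section.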
> The differential polynomial is most naturally decomposed as P(f)= > P>*f>> Here we use vector notation for tuples =(i,\,i)> and =(j,\,j)> of non-negative integers: <\eqnarray*> \|>||>|\\<\|\|\>>||+\+i;>>|\\>|>|\j\\\i\j;>>|>>||i>*(f)>*\*(f)>;>>||\>>|||i>*\*|i>.>>>> The -th of is defined by =\\<\|\|\>=i>P>*f>,> so that P.> along orders> Another very useful decomposition of is its : P(f)= > P]> f]>> In this notation, > runs through tuples =(\,\,\)> of integers in ,r}> of length deg P>, and ]>=P(1)>,\,\(l)>]>> for all permutations of integers. We again use vector notation for such tuples <\eqnarray*> \|>||>|\\<\|\|\>>||+*\*+\\|>;>>|\\>|>|\|=\|\\|\\\\\\\\\|>\\\|>;>>|]>>||)>*\*f\|>)>;>>||\>>|||\>*\*\|>|\\|>>.>>>> We call \\<\|\|\>> the of > and P\<\|\|\>=max\|P]>\0>\<\|\|\>\\<\|\|\>> the of . > It is convenient to denote the successive logarithmic derivatives of by >>||/f;>>|i\>>||\\\> times)>.>>>>> Then each > can be rewritten as a polynomial in terms of >,\,fi\>>: ||>|>||>*f;>>|>||>)+f\>*f>)*f;>>|>||>)+3*f\>*(f>)+(f\>)*f>+f\\>*f\>*f>)*f;>>||>|>>>> We define the of by =(i,\,i)>P\\>*f\\>,> where \\>=f>*(f>)>*\*(fr\>)>.> Now consider the lexicographical ordering > on >, defined by \\>|>|\j)\>>|||=j\i\j)\>>|||\>>|||=j\\\i=j\i\j).>>>>> This ordering is total, so there exists a maximal for > with \\>\0>, assuming that 0>. For this , we have )\P\\>*f\\>> for all , whose dominant monomial is sufficiently large. Given a differential polynomial and a transseries it is useful to define the and > and h>> of w.r.t. 
and the > of as being the unique differential polynomials, such that for all , we have (f)>||>|h>(f)>||>|(f\)>||.>>>>> The coefficients of > are explicitly given by P>= \\> |\>*h-\>*P>.> The coefficients of h>> are more easily expressed using decompositions along orders: Ph,[\]>= \\>|\>*h- \]>*P]>.> The coefficients of the upward shifting (or compositional conjugation by >) are given by (P\)]>=\\>s,\>*e\\<\|\|\>*z>*(P]>\),> where the ,\>> are generalized Stirling numbers of the first kind: |||||,\>>|=>|,\>*\*s\|>,\\|>>;>>|>|| s*x*f(log z).>>>>>> In order to solve (), the first step is to find all possible dominant monomials of solutions, together with their coefficients. In the classical setting of algebraic equations, such potential dominant monomials can be read off graphically from the slopes of the Newton polygon and the corresponding coefficients are roots of the Newton polynomials associated to these slopes. In the differential setting, several phenomena make it more difficult to find the potential dominant terms in such a direct way. First of all, in the algebraic setting, potential dominant monomials always correspond to the cancellation of two terms in of different degrees. In the differential setting, cancellations may also arise in a single homogeneous component. For instance, the differential equation <\equation*> f*f-(f)=0 has *e*x>> as its general solution. Another difficulty is that differentiation does not preserve valuation: we usually do not have \f> for transseries \>. Consequently, even if we know that the dominant monomial corresponds to the cancellation of two single terms in of different degrees, then the potential dominant monomial can not be read off directly from the dominant monomials of these terms. For instance, in the equation <\equation*> (f)-e>=0, the only potential dominant monomial is =e*e/2>>. However > is not the square root /2>> of the quotient of the dominant monomials >> and of the coefficients of and )> in . 
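For a non-differential example (our own), the formula for the coefficients of the additive conjugate specializes to the classical Taylor shift:

```latex
% For P(f) = f^2 - x and an arbitrary transseries h:
P_{+h}(f) \;=\; P(h+f) \;=\; f^{2} + 2\,h\,f + (h^{2} - x) ,
% in accordance with  P_{+h,i} = \sum_{j \geq i} \binom{j}{i} h^{j-i} P_{j} :
P_{+h,2} = 1 , \qquad
P_{+h,1} = \tbinom{2}{1}\, h = 2\,h , \qquad
P_{+h,0} = h^{2} - x .
```

In the genuinely differential case, the binomial coefficients are replaced by the multinomial coefficients over the multi-indices $\mathbf i \leqslant \mathbf j$ appearing in the displayed formula.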
In order to solve these problems, we use the combination of several ideas. First of all, we will no longer seek to read off potential dominant monomials directly from the Newton polygon. Instead, we will characterize when is a potential dominant monomial, so we will only have to consider horizontal slopes. Then > will be a potential dominant monomial for the equation () if and only if is a potential dominant monomial for the equation <\equation*> P\>(f)=0(f\\/\). A second important tool is upward shifting. Although we do not always have \f>, we do have >\f> for all purely exponential transseries with 1>. Here a is a transseries which can be expanded with respect to transbases ,\,\)> with =e>. For instance, >+e-x>+e-2*x>+\> is purely exponential, but and >> are not. Any transseries becomes purely exponential after a sufficient number of upward shiftings. In order to decide whether is a potential dominant monomial for (), it is interesting to study the nature of the dominant part of after a sufficient number of upward shiftings. Setting =max>\>>>, we define this of to be the differential polynomial <\equation*> \(c)=>P,\>*c> with coefficients in >. Denoting by )> the -th upward shifting of the differential polynomial , the following result can be proved : Let be a differential polynomial with purely exponential coefficients. Then there exists a polynomial \[c]> and an integer >, such that for all \<\|\|\>P\<\|\|\>>, we have )>=Q(c)*(c)>>.> <\example> For -(f)>, we have <\eqnarray*> >||-f*f-(f))*e>>|\>||*e*e>+(f*f-f*f-(f))*e*e>>>|\\>||*e*e>*e>>+\>>||>|>>> and <\eqnarray*> >||-(c)>>|>>||-c*c-(c)>>|\>>||>>|\\>>||>>||>|>>> In particular, we see that )>=-c*c> for all 2> (whence =-c*c>; see below). The polynomial )>> in proposition is called the >, and we denote it by >. More generally, the differential Newton polynomial associated to a transmonomial > is \>>>. We say that > is a for (), if and only if \\> and \>>> has a non trivial root \>>. 
Given such a root , we call > a for (). It should be noticed that potential dominant monomials are not always dominant monomials of solutions in the context of transseries. Indeed, an equation like <\equation*> f-2*f+1+e=0 has no transseries solution, although it does admit as a potential dominant monomial. An important invariant of () is its , which is by definition the highest possible degree of the Newton polynomial >>> associated to a transmonomial \\>. In the algebraic setting, the Newton degree gives a bound to the number of solutions to (), when counting with multiplicities (if the constant field is algebraically closed, it actually gives the exact number of solutions). In the differential setting, the Newton degree must be non-zero if we want the equation to admit solutions. Now that we know how to define potential dominant monomials, the next question is how to find them. In fact, there are three types of potential dominant monomials >, depending on the form of \>>>. If \>>\\[c]>, then we say that > is . If \>>\(c)>>, then we say that > is . In the remaining case, when \>>\(\[c]\\\)*(c)\\{0}>>, we say that > is . The algebraic and mixed potential dominant monomials correspond to the slopes of ``what would have been the differential Newton polygon''. Differential and mixed potential dominant monomials correspond to the introduction of integration constants in the general solution to (). The algebraic and mixed potential dominant monomials can all be found by ``equalizing'' the dominant monomials >> and >> of two different homogeneous components of via a multiplicative conjugation. This is always possible : <\proposition> Let j> be such that \0> and \0>. Then there exists a unique transmonomial =\>, such that +P)\>>> is not homogeneous. We call > an for and there are clearly at most a finite number of them. All algebraic and mixed potential dominant monomials for () are necessarily equalizers, although not all equalizers are potential dominant monomials. 
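In the purely algebraic case, equalizers are just the slopes of the classical Newton polygon; here is a toy run of the method on an equation without derivatives (our own example):

```latex
% P(f) = f^2 + x f + 1, with homogeneous parts P_2 = f^2, P_1 = x f, P_0 = 1.
% Equalizing P_2 and P_1 via a multiplicative conjugation f -> m f:
% m^2 = x m holds for the (1,2)-equalizer m = x, and
P_{\times x}(f) \;=\; x^{2}\,(f^{2} + f) + 1 ,
% so the Newton polynomial c^2 + c has the non-trivial root c = -1,
% giving the potential dominant term -x.  Refining f = -x + \tilde f:
P_{+(-x)}(\tilde f) \;=\; \tilde f^{2} - x\,\tilde f + 1 \;=\; 0
\qquad (\tilde f \prec x) ,
% an equation of Newton degree 1 (quasi-linear), whose solution
% \tilde f = x^{-1} + x^{-3} + 2 x^{-5} + ...  yields
% f = -x + x^{-1} + x^{-3} + ... .
```

The other equalizer of this example, $\mathfrak e_{0,1} = x^{-1}$, similarly leads to the second root $f = -x^{-1} - x^{-3} - \cdots$, in agreement with the Newton degree $2$ of the original equation.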
Under the assumption that we made sufficiently many upward shiftings so that all >> can be expanded with respect to a transbasis ,\,\)> with =e>, the equalizers can be computed recursively, using the fact that >\f> for all purely exponential with 1>. <\example> Let us compute > for <\equation*> P=f*f-(f)+1. Since -(f)>\\[c]*(c)>>, we first shift upwards: <\equation*> P\=(f*f-f*f-(f))*e+1. We now have to equalize > and > via a multiplicative conjugation with >: <\equation*> P\e>=f*f-f*f-(f)-f+1. We still do not have e>>\\[c]*(c)>>, so we shift upwards once more: <\equation*> P\e>\=P\\e>>=-f-f*f*e+(f*f-(f))*e+1. At this point, we both have (P\\e>,0>)=\(P\\e>,2>)> and \e>>>\\[c]*(c)>>. In other words, =x=e>\\> is the desired equalizer. The remaining type of potential dominant monomials, the differential potential dominant monomials, corresponds to cancellations inside homogeneous parts of . Now in order to solve a homogeneous equation =0>, one usually rewrites > in terms of the -th >: <\equation*> P(f)=R(f>)*f. For instance, <\equation*> f*f-(f)=(f>)*f. In order to find the differential potential dominant monomials that correspond to cancellations inside >, we now need to ``solve (f)=0> up to '', which is equivalent to ``solving (f>)=0> up to x*\)>''. The border x*\> is special in the sense that \1/x*log x*log x*\> whenever 1> and \1/x*log x*log x*\> whenever 1>. More precisely, we have : <\proposition> The monomial \\> is a potential dominant monomial of w.r.t. (f)=0> if and only if the equation >>(f>)=0f>\>>> has strictly positive Newton degree. <\remark> We committed a small abuse of notation by treating )> as a transmonomial. In order to be painstakingly correct, we should replace )> by x*\*log x)>, where is a strict bound for the logarithmic depths of > and all coefficients of . Assuming that we have found a potential dominant term > for (), we next have to show how to find the remaining terms of a solution (or a ``solution up to )>''). 
This is done through a process of refinements. A refinement is a change of variables with an asymptotic constraint $f=\varphi+\tilde f$ $(\tilde f \prec \tilde{\mathfrak v})$, where $\varphi \in \mathbb{T}$ is an arbitrary transseries (and not necessarily a term $c\,\mathfrak m$; we will soon see the importance of this) and $\tilde{\mathfrak v} \preccurlyeq \mathfrak v$ a transmonomial. Such a refinement transforms () into $P_{+\varphi}(\tilde f)=0$ $(\tilde f \prec \tilde{\mathfrak v})$ and we call it admissible, if this new equation has strictly positive Newton degree. The important property of refinements is that they bring us closer to solutions: <\proposition> Let $\tau=c\,\mathfrak m$ be the dominant term of $\varphi$ and assume that $\tilde{\mathfrak v}=\mathfrak m$. Then the Newton degree of $P_{+\varphi}(\tilde f)=0$ $(\tilde f \prec \tilde{\mathfrak v})$ is equal to the multiplicity $\mu$ of $c$ as a root of $N_{P_{\times \mathfrak m}}$. In particular, $\mu$ is bounded by the Newton degree of $P(f)=0$ $(f \preccurlyeq \mathfrak v)$. In the proposition, the multiplicity of $c$ as a root of $N_{P_{\times \mathfrak m}}$ is understood to be the least $i$, such that there exists an $\mathbf i$ with $\|\mathbf i\|=i$ and $N^{(\mathbf i)}(c)\neq 0$. Here $N^{(\mathbf i)}=\partial^{\|\mathbf i\|} N/\partial f^{\,i_{0}} \cdots \partial (f^{(r)})^{\,i_{r}}$. In favorable cases, the Newton degree strictly decreases until it becomes equal to $1$. At this point, we call the new equation quasi-linear, and it has at least one solution: <\proposition> Assume that the equation $P_{+\varphi}(\tilde f)=0$ $(\tilde f \prec \tilde{\mathfrak v})$ is quasi-linear. Then it admits at least one transseries solution. In fact, we proved that there exists a very special, ``distinguished'' solution, which has some nice additional properties. Given such a distinguished transseries solution $f_{\mathrm d}$ to a quasi-linear equation, all other solutions can be found by solving the homogeneous quasi-linear equation $P_{+f_{\mathrm d}}(\tilde f)=0$ $(\tilde f \prec \tilde{\mathfrak v})$. A homogeneous quasi-linear equation should really be seen as a twisted homogeneous linear differential equation. For instance, we have: <\proposition> Let $f_{1}\prec \cdots \prec f_{s}$ be non-zero solutions to a quasi-linear differential equation $(\,)$ with distinguished solution $0$. Then $s \leqslant r$. A more complicated situation arises when the Newton degree does not descend to one after a finite number of termwise refinements, i.e. refinements () with $\varphi$ a term $c\,\mathfrak m$. This typically occurs in the presence of almost double solutions, as in the example <\equation> P(f)=f->-e=0. 
When applying the naive, termwise refinement procedure, we would obtain an infinite chain of refinements: <\eqnarray*> ||(\1);>>|>||+|~>(|~>\x);>>||~>>||+|~>|~>(|~>|~>\x);>>||>|>>> Now a classical way to find multiple solutions is to differentiate the equation, which yields <\equation> 2*f->=0 in our example (). In order to find the almost double solutions, we now replace the above infinite chain of refinements by a single one <\equation*> f=\+(\1), where > is a solution to the quasi-linear equation (). More generally, the objective is to force a strict decrease in the Newton degree after a finite number of refinements, while solving partial derivatives of the equation ,\,f>. More precisely, denoting by the Newton degree of (), an is a refinement +(\),> such that <\description> The Newton degree of >()=0(\)> equals . For any |~>\>, the Newton degree of +|~>>(|~>)=0(|~>\\(|~>))> is d>. In , we proved that we may achieve an unravelling through a finite number of well-understood refinements: <\proposition> Let > be a potential dominant term for )> of multiplicity ()> has Newton degree ). Then there exists a finite sequence of refinements <\eqnarray*> >||+f(f\\);>>|>||+f(f\\);>>||>|>|>||+f(f\\),>>>> such that each > is either a term in *\> or a solution to an equation of the form <\equation*> P+\+\,\\>|(\ f>)> |\>=0(f\\), where \{0,1}> and > is a transmonomial, and such that <\equation*> f=\+\+\+(\\) is an unravelling, and > has > as its dominant term. Putting together all techniques described so far, we obtain a theoretic way to compute all solutions to asymptotic algebraic differential equations. In fact, we have shown how to compute the generic solution of such an equation (which involves a finite number of integration constants) in a fully effective way. As a side effect, we also obtained bounds for the logarithmic depths of solutions to () and the fact that such solutions are necessarily grid-based if the coefficients of () are. 
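The differentiation trick behind unravellings can be seen on a toy analogue of this phenomenon (our own example, with an invented right-hand side chosen flat enough to create an almost double root):

```latex
% Take P(f) = (f - \varphi)^2 - e^{-e^x},  with  \varphi = 1 + x^{-1} + x^{-2} + ... .
% Termwise refinements f = 1 + ..., then x^{-1} + ..., never terminate:
% the Newton degree remains 2 at every stage.  Differentiating w.r.t. f:
\frac{\partial P}{\partial f}(f) \;=\; 2\,(f - \varphi) \;=\; 0 ,
% a quasi-linear equation with distinguished solution f = \varphi.
% The single refinement  f = \varphi + \tilde f  then gives
P_{+\varphi}(\tilde f) \;=\; \tilde f^{\,2} - e^{-e^{x}} \;=\; 0 ,
% whose solutions  \tilde f = \pm e^{-e^{x}/2}  separate the double root.
```

One refinement by a solution of the derived equation thus replaces the infinite chain of termwise refinements, which is exactly the role of the equations $\partial P/\partial f^{(\cdot)} = 0$ in the proposition above.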
<\theorem> Consider an asymptotic algebraic differential equation )> with transseries coefficients in > of logarithmic depths l>>, and let , and denote its Newton degree, order weight. Then any transseries (whether grid-based or not) solution to )> lies in > and its logarithmic depth is bounded by >. This theorem has a certain number of remarkable consequences. It proves for instance that the Riemann >-function does not satisfy any algebraic differential equation over > (nor over > or >). Similarly, solutions like <\eqnarray*> >||+>>+>>+\>>|>||+e x>+e x>+\>>>> to functional equations <\eqnarray*> (x)>||+f(x>)>>|(x)>||+f(e x>)>>>> do not satisfy any algebraic differential equations over >. Using the methods from the previous section to find all transseries solutions to asymptotic algebraic differential equations, the intermediate value theorem should now be a mere application. Indeed, we will mimic the classical dichotomic method for finding roots of continuous functions on an interval where a sign change occurs. In our case, the ``transline'' > is very non archimedian, so we will have to consider non-standard intervals. The Newton polygon method will be used to restrict the interval where we search more and more by maintaining the sign change property. In section 2.6 of , we have shown that (non-standard) transseries intervals are always of the form ,\)>, ,\]>, ,\)> or ,\]>, where > and > are in the ``compactification'' of . In the sequel, we will only need to consider intervals ,\)>, with non-standard > (and >) of the following forms: <\itemize> =\\>, with \\>; =\\>, with \\>; =\\*\>, with \\> and where > is a transmonomial. =\\*\>, with \\> and where > is a transmonomial. =\\>, with \\> and =(x*log x*log log x*\)>. Here > and > respectively designate formal infinitely small and large constants \>\\>> and >\\>. Similarly, > and > designate the infinitely small and large constants >> and >>. 
We may interpret > as a of the transline into two pieces ={f\\\|f\\}\{f\\\|f\\}>. Notice that <\eqnarray*> \\|f\}>||\\|\g\\:g\1\f=g};>>|\\|f\}>||\\|\g\\:g\1\f=g}.>>>> For instance, ,)> contains all transseries which are larger than >, like > and >, but not *(log x)>. Now instead of directly proving the intermediate value theorem, it is more convenient to prove a generalization of it for non-standard intervals. Before doing this, we first have to extend the notion of the sign of to the end-points of non-standard intervals. After that, we will be able to state the more general intermediate value theorem and prove it using the Newton polygon method. We will show that, given a cut > of one of the above types, the function (f)=sign P(f)> may be extended by continuity into > from at least one direction: <\itemize> If =\+>, then > is constant on ,\)=(\,\)> for some \\>. If =\+>, then > is constant on ,\)> for some \\>. If =\+*\>, then > is constant on ,\)> for some \\>. If =\+*\>, then > is constant on ,\)> for some \\>. If =\+>, then > is constant on ,\)> for some \\>. (In the cases =\->, =-> and so on, one has to interchange left and right continuity in the above list.) Modulo additive and multiplicative conjugations, it suffices to deal with the cases when =0> and =1>. We may also assume without loss of generality that we have made sufficiently many upward shiftings, so that the coefficients of are purely exponential. The next two lemmas deal with the first two cases. <\lemma> Let be a differential polynomial with coefficients in . Then f>)> has constant sign (\\)> for all sufficiently large \>. <\proof> This follows from (). <\lemma> Let be a differential polynomial with coefficients in . Then \>)> has constant sign (\\)> for all sufficiently small \\>>. <\proof> This is proved in a similar way as lemma . 
In order to deal with the remaining three cases, we may assume without loss of generality that <\equation> N(c)=Q(c)*(c)>, with \[c]> and \\> (by theorem and modulo at most P\<\|\|\>> upward shiftings). We will denote the multiplicity of as a root of by >. <\lemma> For all \\1> with \e>, the signs of )> and )> are independent of > and given by )>*(c->=()>*c+>=)>>(c).> <\proof> This follows from (), () and the fact that <\equation*> P=(\+O(ex>))*\ for some \0>, because the coefficients of are purely exponential . <\corollary> If is homogeneous of degree , then <\equation> ()=(\)=>(\>)=>(>), for all \\1> with \e>. <\corollary> Let \c> be constants such that (c+)*(c-)\0>. Then there exists a constant (c,c)> with (c-)*(c+)\0>. <\lemma> For all f\1> with e>, the signs of )> and are independent of and given by )>*>==sign Q.> <\proof> This is proved in a similar way to lemma . <\corollary> If is homogeneous of degree , then <\equation> ()=(f)=>(f>)=>(), for all f\1> with e>. <\corollary> Let > be a constant such that (c+)*()\0>. Then there exists a constant c> with (c-)*(c+)\0>. We can now state and sketch the proofs of the differential intermediate value theorem for generalized intervals. In fact, we simultaneously proved two theorems: the intermediate value theorem itself and a variant ``modulo x*\))>''. <\theorem> Let > and > be a transseries and a transmonomial in , respectively. Assume that changes sign on an open interval of one of the following forms: <\enumerate-alpha> ,\)>, for some \\> with (\-\)=\>. -*\,\)>. ,\+*\)>. -*\,\+*\)>. Then changes sign at some I>. <\theorem> Let > and \> be a transseries and a transmonomial in , respectively. Assume that changes sign on an open interval of one of the following forms: <\enumerate-alpha> +,\-)>, for some \\> with (\-\)=\>. -*\,\-)>. +,\+*\)>. -*\,\+*\)>. Then changes sign on ,f+)> for some I> with ,f+)\I>.
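In the purely algebraic case, the role of the multiplicity in the first lemma above is the familiar one from Taylor expansion: if c is a root of Q of multiplicity mu, then for small eps > 0 the sign of Q(c + eps) equals the sign of the first non-vanishing derivative Q^(mu)(c), and the sign of Q(c - eps) picks up an extra factor (-1)^mu. A numeric sanity check with a sample polynomial (chosen here for illustration, not taken from the paper):

```python
# Sign of Q(c +/- eps) for small eps > 0 is governed by the multiplicity
# mu of c as a root of Q:
#   sign Q(c + eps) = sign Q^(mu)(c)
#   sign Q(c - eps) = (-1)**mu * sign Q^(mu)(c)
def sign(x):
    return (x > 0) - (x < 0)

# Sample: Q(c) = (c - 1)^2 * (c + 3), so c = 1 is a root of multiplicity mu = 2.
Q = lambda c: (c - 1) ** 2 * (c + 3)
mu = 2
d2Q_at_1 = 8.0  # Q''(1), computed by hand; positive

eps = 1e-6
right = sign(Q(1 + eps))  # expected: sign Q''(1) = +1
left = sign(Q(1 - eps))   # expected: (-1)**mu * sign Q''(1) = +1
```

With mu even, as here, Q keeps the same sign on both sides of the root, so no sign change occurs at c = 1; a sign change at a point forces odd multiplicity there, which is the mechanism behind the corollaries above.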
<\proof> Using symmetry considerations and splitting up the interval into smaller parts, it is first shown that it suffices to consider case (). Then we may assume without loss of generality that =0> (modulo an additive conjugation of by >) and the theorem is proved by a triple induction over the order of , the Newton degree of the asymptotic algebraic differential equation (f\\)> and the maximal length of a shortest sequence of refinements as in proposition . If , it is not hard to improve proposition so that it yields a solution where a sign change occurs. If 1>, the lemmas and their corollaries from the previous section may be used to reduce the interval together with , or , so that we may conclude by induction. In the introduction, we mentioned the question of finding out which solutions of differential (or more general) equations may be modelled adequately using transseries. We know for instance (although we still have to write this down in detail) that the intermediate value theorem also holds for algebraic differential-difference equations, where the difference operators are post-compositions with transseries of exponentiality (this means that \g\exp\x> for all sufficiently large ; for instance, , , > and x>> have exponentiality , but not >). Of course, one has to allow well-ordered transseries in this case, but the exponential and logarithmic depths remain bounded. It would be interesting to know whether more general intermediate value theorems would follow from similar theorems for surreal numbers. Indeed, such theorems might be easier to prove for surreal numbers, because of the concept of the simplest surreal number satisfying a given property. In order to make this work, one would have to define a canonical derivation and composition for the surreal numbers, where > plays the role of . From an effective point of view, proofs using surreal numbers would be less satisfying, though.
Moreover, such proofs would not provide us with a method for finding generic solutions to differential equations in terms of integration constants. Yet another interesting setting for proving intermediate value theorems is model theory. The field of transseries satisfies some interesting axioms involving the ordered field operations, differentiation and the asymptotic relation >>. For instance, <\equation*> f\g\g\1\f\g. What differential Henselian property would be needed in order to prove intermediate value theorems in more general models of theories that contain axioms like the above one? Is it always possible to embed models of such theories into suitable generalizations of fields of transseries? We recently made some progress on this topic with Aschenbrenner and van den Dries. Another interesting problem is to prove the analytic counterparts of the intermediate value theorem and its generalizations in Écalle's setting of analyzable functions. We are confident that there should not be any major problems here, although the details still need to be worked out. So far, we have been working in the real setting, in the absence of any oscillation. Another major problem is to generalize the theory to the complex setting. Some progress has been made in on this question: we showed how to construct fields of complex transseries on ``non-degenerate regions'' and proved that any algebraic differential equation over such a field admits a solution. We also proved that linear differential equations admit a full system of solutions. In other words, the Picard-Vessiot extension of a field of complex transseries is isomorphic to the field itself. Unfortunately, the current fields of complex transseries are not differentially algebraically closed, since the only solutions to the elliptic equation <\equation*> f+(f)+f=0 are the solutions to <\equation*> f+f=0.
The question of constructing a differentially algebraically closed field, which reflects the asymptotic behavior of solutions to algebraic differential equations, still remains open. <\bibliography|bib|alpha|~/publs/all.bib> <\bib-list|[99]> B. I. Dahn and P. Göring. Notes on exponential-logarithmic terms. , 127:45--50, 1986. J. Écalle. . Hermann, collection: Actualités mathématiques, 1992. G. H. Gonnet and D. Gruntz. Limit computation in computer algebra. Technical Report 187, ETH, Zürich, 1992. H. Hahn. Über die nichtarchimedischen Größensysteme. , 116:601--655, 1907. B. Salvy. . PhD thesis, École Polytechnique, France, 1991. J. Shackell. Growth estimates for exp-log functions. , 10:611--632, 1990. J. van der Hoeven. . PhD thesis, École polytechnique, France, 1997. J. van der Hoeven. A differential intermediate value theorem. Technical Report 2000-50, Univ. d'Orsay, 2000. J. van der Hoeven. Complex transseries solutions to algebraic differential equations. Technical Report 2001-34, Univ. d'Orsay, 2001. 
Table of contents

1 Introduction
2 The field of real, grid-based transseries
  2.1 Generalized power series
  2.2 Grid-based transseries in x
    2.2.1 Logarithmic transseries
    2.2.2 Exponential extensions
    2.2.3 The field of grid-based transseries in x
  2.3 Computations with transseries
    2.3.1 Asymptotic bases
    2.3.2 Transbases
  2.4 Differentiation and composition
    2.4.1 Differentiation
    2.4.2 Composition
3 The differential Newton polygon method
  3.1 Notations
    3.1.1 Natural decomposition of P
    3.1.2 Decomposition of P along orders
    3.1.3 Logarithmic decomposition of P
    3.1.4 Additive and multiplicative conjugations and upward shifting
  3.2 Potential dominant terms and Newton degree
    3.2.1 Introduction
    3.2.2 Potential dominant terms and Newton degree
    3.2.3 Algebraic and mixed potential dominant monomials
    3.2.4 Differential potential dominant monomials
  3.3 Refinements
    3.3.1 Reducing the Newton degree through refinements
    3.3.2 Unravelling almost multiple solutions
    3.3.3 The structure of solutions to algebraic differential equations
4 The intermediate value theorem
  4.1 Introduction
  4.2 Extending the sign function at non-standard points
  4.3 Proof of the intermediate value theorem on generalized intervals
5 Perspectives
Bibliography