<\body>

<\author-address>
  Département de Mathématiques (Bât. 425), Université Paris-Sud, 91405 Orsay Cedex, France
</author-address>

<\abstract>
  Let T be the field of grid-based transseries or the field of transseries of finite logarithmic depths. In our PhD thesis we announced that, given a differential polynomial P with coefficients in T and transseries φ < ψ with P(φ) < 0 and P(ψ) > 0, there exists an f ∈ (φ, ψ) such that P(f) = 0. In this note, we prove this theorem.
</abstract>

<section|Introduction>

<subsection|Statement of the results>

Let C be a totally ordered exp-log field. In chapter 2 of [vdH97], we introduced the field T = C[[[x]]] of transseries in x of finite logarithmic and exponential depths. In chapter 5, we then gave an (at least theoretical) algorithm to solve algebraic differential equations with coefficients in T. By that time, the following theorem was already known to us (and stated in the conclusion), but due to lack of time, we had not been able to include the proof.

<\theorem>
  Let P be a differential polynomial with coefficients in T. Given φ < ψ in T, such that P(φ)·P(ψ) < 0, there exists an f ∈ (φ, ψ) with P(f) = 0.
</theorem>

In the theorem, (φ, ψ) stands for the open interval between φ and ψ. The proof that we will present in this note is based on the differential Newton polygon method as described in chapter 5 of [vdH97]. We will freely use any results from there. We recall (and renew) some notations in section 2.

In chapter 1 of [vdH97], we also introduced the field of grid-based transseries in x, a subfield of C[[[x]]]. In chapter 12, we showed that our algorithm for solving algebraic differential equations preserves the grid-based property. Therefore, it is easily checked that theorem 1 also holds when T is the field of grid-based transseries. Similarly, it may be checked that the theorem holds if we take for T the field of transseries of finite logarithmic depths (and possibly countable exponential depths).

<subsection|Proof strategy>

Assume that P is a differential polynomial with coefficients in T which admits a sign change on a non-empty interval (φ, ψ) of transseries. The idea behind the proof of theorem 1 is very simple: using the differential Newton polygon method, we shrink the interval (φ, ψ) further and further while preserving the sign change property. Ultimately, we end up with an interval which is reduced to a point, which will then be seen to be a zero of P.

However, in order to apply the above idea, we will need to allow non-standard intervals (φ, ψ) in the proof. More precisely, φ and ψ may generally be taken in the compactification of T, as constructed in section 2.6 of [vdH97]. In this paper we will consider non-standard φ (resp. ψ) of the following forms:

<\itemize>
  φ = φ̂ ± δ, with φ̂ ∈ T;

  φ = φ̂ ± Δ, with φ̂ ∈ T;

  φ = φ̂ ± ε·𝔪, with φ̂ ∈ T and where 𝔪 is a transmonomial;

  φ = φ̂ ± E·𝔪, with φ̂ ∈ T and where 𝔪 is a transmonomial;

  φ = φ̂ ± ω, with φ̂ ∈ T and ω = x·log x·log log x ⋯.
</itemize>

Here ε and E respectively designate the infinitely small and the infinitely large constant in the compactification of C. Similarly, δ and Δ designate the infinitely small and the infinitely large element in the compactification of T. We may then interpret such a φ as a cut of the transline into two pieces T = {f ∈ T | f < φ} ∐ {f ∈ T | f > φ}. Notice that

<\equation*>
  {f ∈ T | f < φ̂ + ε} = {f ∈ T | ∃g ∈ T: g ≺ 1 ∧ f ≤ φ̂ + g};

  {f ∈ T | f > φ̂ − ε} = {f ∈ T | ∃g ∈ T: g ≺ 1 ∧ f ≥ φ̂ + g}.
</equation*>

<\remark>
  Actually, the notations φ̂ ± Δ, φ̂ ± E·𝔪, and so on are redundant. Indeed, φ̂ ± Δ does not depend on φ̂, and we have φ̂ + ε·𝔪 = φ̂′ + ε·𝔪 whenever φ̂ − φ̂′ ≺ 𝔪, etc.
</remark>

Now consider a generalized interval I = (φ, ψ), where φ and ψ may be as above. We have to give a precise meaning to the statement that P admits a sign change on I. This will be the main object of sections 3 and 4.
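For ordinary intervals with endpoints in T, a sign change simply means P(φ)·P(ψ) < 0, and theorem 1 then produces a zero in between. The following small example is ours and does not appear in [vdH97]; it only fixes the meaning of the statement on a standard interval, for the sample polynomial P(f) = f' − f and the interval (x, e^{2x}).

% Illustration only (not from the paper): a concrete instance of theorem 1
% with P(f) = f' - f and the interval (phi, psi) = (x, e^{2x}) in T.
\[
  P(x) = 1 - x < 0,
  \qquad
  P\bigl(e^{2x}\bigr) = 2e^{2x} - e^{2x} = e^{2x} > 0,
\]
\[
  \text{and indeed } f = e^{x} \in (x, e^{2x}) \text{ satisfies } P(f) = e^{x} - e^{x} = 0.
\]

Of course, the interest of theorem 1 lies in intervals, possibly with non-standard endpoints as above, on which no zero is known in advance.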
We will show there that, given a cut > of the above type, the function (f)=sign P(f)> may be prolongated by continuity into > from at least one direction: <\itemize> If =\+>, then > is constant on ,\)=(\,\)> for some \\>. If =\+>, then > is constant on ,\)> for some \\>. If =\+*\>, then > is constant on ,\)> for some \\>. If =\+*\>, then > is constant on ,\)> for some \\>. If =\+>, then > is constant on ,\)> for some \\>. (In the cases =\->, =-> and so on, one has to interchange left and right continuity in the above list.) Now we understand that admits a sign change on a generalized interval ,\)> if (\)*(\)\0>. <\eqnarray*> g>|>|>|g>|>|>|g>|>|log \|g\|;>>|g>|>|log \|g\|.>>>> <\eqnarray*> >>||/f;>>|i\>>||\\> times)>.>>>> .> <\equation> P(f)= > P>*f)> Here we use vector notation for tuples =(i,\,i)> and =(j,\,j)> of integers: <\equation*> |||||||||\|>|=>|>|\\>|>|\j\\\i\i;>>|>>||>*(f)>*\*(f)>;>>||\>>|||i>*\*|i>.>>>>> along orders.> <\equation> P(f)= > P]> f]> In this notation, > runs through tuples =(\,\,\)> of integers in ,r}> of length at most , and ]>=P(1)>,\,\(l)>]>> for all permutations of integers. We again use vector notation for such tuples <\equation*> |||||||||||\|>|=>|>|\\<\|\|\>>||+*\*+\\|>;>>|\\>|>||\|=\|\\|\\\\\\\\\|>\\\|>;>>>>>>|]>>||)>*\*f\|>)>;>>||\>>|||\>*\*\|>|\\|>>.>>>>> We call \>\|\| the of > and <\equation*> \<\|\|\>P\<\|\|\>=max\|P]>\0>\<\|\|\>\\<\|\|\> the of . <\eqnarray*> (f)>||>|h>(f)>||>|(f\)>||.>>>> Additive conjugation: <\equation> P>= \\> |\>*h-\>*P>. Multiplicative conjugation: <\equation> Ph,[\]>= \\>|\>*h- \]>*P]>. Upward shifting (compositional conjugation): <\equation> (P\)]>=\\>s,\>*e\\<\|\|\>*x>*(P]>\), where the ,\>> are generalized Stirling numbers of the first kind: <\equation*> |||||,\>>|=>|,\>*\*s\|>,\\|>>;>>|>|| s*x*f(log x).>>>>> > near zero and infinity> > near infinity> <\lemma> Let be a differential polynomial with coefficients in . Then f>)> has constant sign for all sufficiently large \>. <\proof> If , then the lemma is clear, so assume that 0>. Using the rules <\eqnarray*> ||>|>||>*f;>>|>||>)*f+f\>*f>*f;>>|>||>)*f+3*f\>*(f>)*f+(f\>)*f>*f+f\\>*f\>*f>*f;>>||>|>>> we may rewrite as an expression of the form =(i,\,i)>P\\>*f\\>,> where \\>\\> and \\>=f>*(f>)>*\*(fr\>)>> for each . Now consider the lexicographical ordering > on >, defined by <\eqnarray*> \\>|>|\j)\>>|||=j\i\j)\>>|||\>>|||=j\\\i=j\i\j).>>>> This ordering is total, so there exists a maximal for >, such that \\>\0>. Now let 1> be sufficiently large such that \>\exp x> for all . Then (f>)=(1>)>*sign P\\>> for all postive, infinitely large exp x>, since x\fr\>\\\f>\f> for all such . > near zero> <\lemma> Let be a differential polynomial with coefficients in . Then \>)> has constant sign for all sufficiently small \\>>. <\proof> If , then the lemma is clear. Assume that 0> and rewrite as in (). Now consider the twisted lexicographical ordering > on >, defined by <\eqnarray*> \\>|>|\j)\>>|||=j\i\j)\>>|||\>>|||=j\\\i=j\i\j).>>>> This ordering is total, so there exists a maximal for >, such that \\>\0>. If 1> is sufficiently large such that \>\exp x> for all , then <\equation> (\>)=(1>)>*sign P> for all postive infinitesimal \exp x>. Assume that has purely exponential coefficients. In what follows, we will denote by >> the associated to a monomial >, i.e. <\equation> N>(c)=>P\,\,\(P\>)>*c>, where <\equation> \=max,\> \>>. 
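Since additive conjugation, multiplicative conjugation and upward shifting are used constantly in what follows, here is a small worked instance of our own (not taken from [vdH97]). It only uses the standard defining relations of the conjugations recalled above, namely P_{+h}(f) = P(h+f), P_{×h}(f) = P(h·f) and P↑(f↑) = P(f)↑, applied to the sample polynomial P(f) = f' + f².

% Worked example (illustration only): conjugating P(f) = f' + f^2.
% Additive conjugation by h: substitute h + f for f.
\[
  P_{+h}(f) = (h+f)' + (h+f)^2 = (h' + h^2) + 2h\,f + f' + f^2 .
\]
% Multiplicative conjugation by h: substitute h f for f.
\[
  P_{\times h}(f) = (h f)' + (h f)^2 = h'\,f + h\,f' + h^2 f^2 .
\]
% Upward shifting: with g = f o exp one has f' o exp = e^{-x} g',
% so P-up is characterised by P-up(g) = e^{-x} g' + g^2.
\[
  P{\uparrow}(g) = e^{-x}\,g' + g^2 , \qquad g = f \circ \exp .
\]

Note how the first derivative picks up a factor e^{−x} under upward shifting; for higher derivatives one obtains combinations of derivatives weighted by powers of e^{−x}, which is where the generalized Stirling numbers above come in.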
The following theorem shows how =N> looks like after sufficiently many upward shiftings: <\theorem> Let be a differential polynomial with purely exponential coefficients. Then there exists a polynomial C[c]> and an integer >, such that for all \<\|\|\>P\<\|\|\>>, we have >=Q*(c)>>. <\proof> Let > be minimal, such that there exists an > with \\<\|\|\>=\> and \)]>\0>. Then we have (N\)=e*x>> and >(c)=\\<\|\|\>=\>\\>s,\>*N]>*c]>,> by formula (). Since >\0>, we must have \\<\|\|\>N\<\|\|\>>. Consequently, N\<\|\|\>\\=\<\|\|\>N>\<\|\|\>\\<\|\|\>N\>\<\|\|\>\\>. Hence, for some >P\<\|\|\>>, we have N>\<\|\|\>=\<\|\|\>N>\<\|\|\>>. But then () applied on > instead of yields >=N>>. This shows that >> is independent of , for \<\|\|\>P\<\|\|\>>. In order to prove the theorem, it now suffices to show that >=N> implies >=Q*(c)>> for some polynomial C[c]>. For all differential polynomials of homogeneous weight >, let <\equation> R>=([c*(c)>]R)*c*(c)>. Since >>=N>>, it suffices to show that whenever >=0>. Now >=0> implies that (x)=0>. Furthermore, () yields <\equation> N\=e*x>*N. Consequently, we also have (e)=e*x>*(N\)(e)=e*x>*(N(x))\=0>. By induction, it follows that (exp x)=0> for any iterated exponential of . We conclude that =P=0>, by the lemma . <\remark> Given any differential polynomial with coefficients in , this polynomial becomes purely exponential after sufficiently many upward shiftings. After at most P\<\|\|\>> more upward shiftings, the purely exponential Newton polynomial stabilizes. The resulting purely exponential differential Newton polynomial, which is in )>>, is called the of . > near constants> In the previous section, we have seen how to compute \)> and \)> for all \\>. In this section, we show how to compute \*\)> and \*\)> for all \\> and all transmonomials >. Modulo an additive and a multiplicative conjugation with > resp. >, we may assume without loss of generality that =0> and =1>. Hence it will suffice to study the behaviour of (c\\)> for C> and positive infinitesimal (but sufficiently large) >, as well as the behaviour of (f)> for positive infinitely large (but sufficiently small) . Modulo suffiently upward shiftings (we have (c+\)=>(c+\\)> and (f)=>(f\)>), we may assume that has purely exponential coefficients. By theorem and modulo at most P\<\|\|\>> more upward shiftings, we may also assume that <\equation> N(c)=Q(c)*(c)>, for some polynomial C[c]> and \>. We will denote by > the multiplicity of as a root of . Finally, modulo division of by its dominant monomial (this does not alter >), we may assume without loss of generality that =1>. > in between constants> <\lemma> For all \\1> with \e>, the signs of )> and )> are independent of > and given by )>*(c->=()>*c+>=)>>(c).> <\proof> Since is purely exponential and =1>, there exists an \0> such that )-N(c+\)\e*x>> for all \1>. Let \0> be such that *x>\\\1>, where =\/(\+\)>. Then \)\!>*Q)>(c)*(\>)>>, whence *\*x>\Q(c+\)\1.> Furthermore, *e*x>*\\\->, whence *\*x>\(\)>\>.> Put together, () and () imply that (c)\e*x>>. Hence (c+\)=>(c+\)>, by (). Now (c\\)=(c\\)*sign ((c\\))>=(1>)>*)>>(c)*(1>)>,> since \0> for all positive infinitesimal >. <\corollary> If is homogeneous of degree , then <\equation> ()=(\)=>(\>)=>(>), for all \\1> with \e>. <\corollary> Let \c> be constants such that (c+)*(c-)\0>. Then there exists a constant (c,c)> with (c-)*(c+)\0>. <\proof> In the case when > is odd, then (c-)*(c+)\0> holds for any c> with 0>, by (). Assume therefore that > is even and let ,\> denote the multiplicities of ,c> as roots of . 
From () we deduce that <\equation> (-1)>*)>>(c)*)>>(c)\0. In other words, the signs of for c> and c> are different. Hence, there exists a root of between > and > which has odd multiplicity >. For this root , () again implies that (c-)*(c+)\0>. > before and after the constants> <\lemma> For all f\1> with e>, the signs of )> and are independent of and given by )>*>==sign Q.> <\proof> Since is purely exponential and =1>, there exists an \0> such that (f)\e*x>,> since ,f,\\e>. Furthermore f>)\Q*(f>)\e> and f>)>\e>, whence (f)\e>. In particular, (f)\e*x>>, so that (f)=>(f)>, by (). Now <\equation> (f>)=(\>)*sign (f>)>=sign Q*(1>)>, since \0> for positive infinitely large . <\corollary> If is homogeneous of degree , then <\equation> ()=(f)=>(f>)=>(), for all f\1> with e>. <\corollary> Let > be a constant such that (c+)*()\0>. Then there exists a constant c> with (c-)*(c+)\0>. <\proof> In the case when > is odd, then (c-)*(c+)\0> holds for any c> with 0>, by (). Assume therefore that > is even and let > be the multiplicity of > as a root of . From () and () we deduce that <\equation> )>>(c)*sign Q\0. In other words, the signs of for c> and > are different. Hence, there exists a root c> of which has odd multiplicity >. For this root , () implies that (c-)*(c+)\0>. It is convenient to prove the following generalizations of theorem . <\theorem> Let > and > be a transseries resp. a transmonomial in . Assume that changes sign on an open interval of one of the following forms: <\enumerate-alpha> ,\)>, for some \\> with (\-\)=\>. -*\,\)>. ,\+*\)>. -*\,\+*\)>. Then changes sign at some I>. <\theorem> Let > and \> be a transseries resp. a transmonomial in . Assume that changes sign on an open interval of one of the following forms: <\enumerate-alpha> +,\-)>, for some \\> with (\-\)=\>. -*\,\-)>. +,\+*\)>. -*\,\+*\)>. Then changes sign on ,f+)> for some I> with ,f+)\I>. <\proof> Let us first show that cases , and may all be reduced to case . We will show this in the case of theorem ; the proof is similar in the case of theorem . Let us first show that case may be reduced to cases , and . Indeed, if changes sign on ,\)>, then changes sign on ,\+*\)>, +*\,\-*\)> or -*\,\)>. In the second case, modulo a multiplicative conjugation and upward shifting, corollary implies that there exists a \\(\-\)>> such that admits a sign change on +\*\)-*\,(\+\*\)+*\)>. Similarly, case may be reduced to cases and by splitting the interval in two parts. Finally, cases and are symmetric when replacing by )>. Without loss of generality we may assume that =0>, modulo an additive conjugation of by >. We prove the theorem by a triple induction over the order of , the Newton degree of the asymptotic algebraic differential equation (f\\)> and the maximal length of a sequence of privileged refinements of Newton degree (we have (r+1)>, by proposition 5.12 in ). Let us show that, modulo upward shiftings, we may assume without loss of generality that and > are purely exponential and that \C[c]*(c)>>. In the case of theorem , we indeed have >(0)=(0)> and >(*\\)=(*\)>. In the case of theorem , we also have e>>()=>(\)=()>. Furthermore, if (,*\\*e)=I\*e> is such that *e> changes sign on ,f+)\I\*e>, then /x\(\,*\)=I> is such that changes sign on /x-,f\/x+)\I>. ) is quasi-linear.> Let > be the potential dominant monomial relative to (). We may assume without loss of generality that =1>, modulo a multiplicative conjugation with >. Since By \C[c]*(c)>>, we have =\*c+\> or =\*c> for certain constants ,\\C>. 
In the case when =\*c+\>, there exists a solution to () with -\/\\0>. Now (0)=sign \> and ()=sign \>. We claim that ()=\>()> and >(\>-)=\(*\)> must be equal. Otherwise > would admit a solution between > and >->, by the induction hypothesis. But then the potential dominant monomial relative to () should have been \>>, if > is the largest such solution. Our claim implies that )*(sign \)=(0)*(*\)\0>, so that 0>. Finally, lemma implies that admits a sign-change at . Lemma also shows that (f-)*(f+)=(f-)*(f+)\0>. In the case when =\*c>, then any constant \C> is a root of >. Hence, for each \0>, there exists a solution to () with \>. Again by lemmas and , it follows that admits a sign change at and on ,f+)>. 1>.> Let > be the largest classical potential dominant monomial relative to (). Since (0)*(*\)\0> (resp. ()*(*\)\0>), one of the following always holds: <\description> We have (0)*(*\)\0> (resp. ()*(*\)\0>). We have (*\)*(*\)\0>. We have (*\)*(*\)\0>. For the proof of theorem , we also assume that \> in the above three cases and distinguish a last in which \>. We are directly done by the induction hypothesis, since the equation (f\\).> has a strictly smaller Newton degree than (). Modulo multiplicative conjugation with >, we may assume without loss of generality that =1>. By corollary , there exists a 0> such that (c-)*(c+)\0>. Actually, for any transseries \c> we then have (\-)*(\+)\0>. Take > such that >()=0(\1)> is a privileged refinement of (). Then either the Newton degree of () is strictly less than , or the longest chain of refinements of () of Newton degree is strictly less than . We conclude by the induction hypothesis. Since > is the largest classical dominant monomial relative to (), the degree of the Newton polynomial associated to any monomial between > and > must be . Consequently, <\equation> (*\)*(*\)=>(*\)*>(*\)=>(\>+)*>(\>-)\0. By the induction hypothesis, there exists a monomial > with >+\\>\\>-> and <\equation> >(\>-)*>(\>+)\0. In other words, > is a dominant monomial, such that \\\\> and <\equation> >(*\)*>(*\)\0. We conclude by the same argument as in case 2b, where we let > play the role of >. Since \> is the largest classical dominant monomial relative to (), the degree of the Newton polynomial associated to any monomial between > and > must be . Consequently, <\equation> ()*(*\)=>()*>(*\)=>(x>+)*>(\>-)\0. By the induction hypothesis, there exists a monomial > with >+\\>\\>-> and <\equation> >(\>-)*>(\>+)\0. In other words, > is a dominant monomial, such that \x\\\\> and <\equation> >(*\)*>(*\)\0. We again conclude by the same argument as in case 2b. <\corollary> Any differential polynomial of odd degree and with coefficients in admits a root in . <\proof> Let be a polynomial of odd degree with coefficients in . Then formula () shows that for sufficiently large \>> we have (-f)*(f)\0>, since > is odd in this formula. We now apply the intermediate value theorem between and . <\bibliography|bib|alpha|~/publs/all.bib> <\bib-list|[99]> J. van der Hoeven. . PhD thesis, École polytechnique, France, 1997. \; <\initial> <\collection> <\references> <\collection> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > <\auxiliary> <\collection> <\associate|bib> vdH:phd vdH:phd vdH:phd vdH:phd vdH:phd <\associate|toc> |math-font-series||1Introduction> |.>>>>|> |1.1Statement of the results |.>>>>|> > |1.2Proof strategy |.>>>>|> > |math-font-series||2List of notations> |.>>>>|> |Asymptotic relations. 