
Celebratio Mathematica

S. R. Srinivasa Varadhan

Large deviation and homogenization

by Fraydoun Rezakhanlou

The main theme of this expository article is to review some of Raghu Varadhan and his collaborators' contributions to the question of homogenization for the following stochastic models:

  • stationary Hamilton–Jacobi (HJ) and Hamilton–Jacobi–Bellman (HJB) equations;
  • random walk in random environment (RWRE);
  • simple exclusion process (SEP).

All the above models share similar scaling behaviors and, in some sense, represent evolving height functions which are governed by local and random growth rules. In fact, the law of a RWRE satisfies an equation which resembles a discrete HJB equation, and the growth rates of the particle currents in SEP are described by a nonlinear function of the height differences. Reviewing Raghu Varadhan's fundamental contributions sheds light on some universal behavior of stochastic growth models.

The Hamilton–Jacobi and Hamilton–Jacobi–Bellman equations

To introduce the basic idea behind homogenization, we first consider the (inhomogeneous) Hamilton–Jacobi (HJ) equation
$$u_t=H(x,u_x),\tag{1}$$
where $H$ is stationary and ergodic in the first variable $x$. More precisely, we have a probability space $(\Omega,\mathcal F,\mathbb P)$, with $\mathcal F$ a Borel $\sigma$-field on $\Omega$ and $\mathbb P$ a probability measure on $(\Omega,\mathcal F)$ which is invariant with respect to a family of translation operators; that is, for every $x\in\mathbb R^d$ there exists a measurable function $\tau_x:\Omega\to\Omega$ such that $\tau_x\circ\tau_y=\tau_{x+y}$ and $\mathbb P(\tau_xA)=\mathbb P(A)$ for every $A\in\mathcal F$ and $x,y\in\mathbb R^d$. We also assume that the family $\{\tau_x\}$ is ergodic; that is, $\tau_xA=A$ for all $x\in\mathbb R^d$ implies that either $\mathbb P(A)=1$ or $0$.

Now $H(x,p,\omega)=H_0(\tau_x\omega,p)$, where $H_0:\Omega\times\mathbb R^d\to\mathbb R$ is a measurable function. We think of $(x,t,u)$ as the microscopic coordinates, with the graph of $u(\cdot,t)$ representing a random interface. To switch to macroscopic coordinates, we set
$$u^\varepsilon(x,t;\omega)=\varepsilon\,u\Big(\frac x\varepsilon,\frac t\varepsilon;\omega\Big).\tag{2}$$
We now have
$$u^\varepsilon_t=H\Big(\frac x\varepsilon,u^\varepsilon_x\Big).\tag{3}$$
We note that the right-hand side of (3) fluctuates greatly over macroscopic shifts in the position $x$. The huge fluctuation in $H$, though, does not necessarily imply correspondingly huge fluctuations in $u^\varepsilon$. This is the homogenization phenomenon; that is, we expect $u^\varepsilon\to\bar u$ as $\varepsilon\to0$, with $\bar u$ solving a homogenized HJ equation
$$\bar u_t=\bar H(\bar u_x),\tag{4}$$
where $\bar H:\mathbb R^d\to\mathbb R$ is the homogenized Hamiltonian and does not depend on $\omega$.
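The scaling (2) turns (1) into (3) by a direct chain-rule computation; writing it out:

```latex
\begin{aligned}
u^\varepsilon_t(x,t)
  &= \varepsilon\cdot\tfrac1\varepsilon\,u_t\Bigl(\tfrac x\varepsilon,\tfrac t\varepsilon\Bigr)
   = H\Bigl(\tfrac x\varepsilon,\,u_x\Bigl(\tfrac x\varepsilon,\tfrac t\varepsilon\Bigr)\Bigr),\\
u^\varepsilon_x(x,t)
  &= \varepsilon\cdot\tfrac1\varepsilon\,u_x\Bigl(\tfrac x\varepsilon,\tfrac t\varepsilon\Bigr)
   = u_x\Bigl(\tfrac x\varepsilon,\tfrac t\varepsilon\Bigr),
\end{aligned}
```

so $u^\varepsilon_t=H(x/\varepsilon,u^\varepsilon_x)$, which is (3).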

As our second example, we consider the Hamilton–Jacobi–Bellman (HJB) equation
$$u_t=H(x,u_x)+\tfrac12\Delta u,\tag{5}$$
with $H(x,p)=H(x,p,\omega)$ as before. We define $u^\varepsilon$ as in (2), and then (3) becomes
$$u^\varepsilon_t=H\Big(\frac x\varepsilon,u^\varepsilon_x\Big)+\frac\varepsilon2\Delta u^\varepsilon.\tag{6}$$
Again, we expect to have $u^\varepsilon\to\bar u$, with $\bar u$ satisfying an equation of the form (4) for a different homogenized Hamiltonian $\bar H$. Indeed, homogenization for both (3) and (6) has been achieved by Souganidis [e4], Rezakhanlou and Tarver [e6], Lions and Souganidis [e11], and Kosygina, Rezakhanlou and Varadhan [5], provided that $H(x,p)$ is convex in $p$ and satisfies suitable technical assumptions on which we do not elaborate here. (See also Kosygina and Varadhan [8] when $H$ is allowed to depend on the time variable.) Notably, [5] obtains a variational formula for $\bar H$. In the case of (6), $\bar H$ is given by
$$\bar H(p)=\inf_g\operatorname{ess\,sup}_\omega\Big[H_0(p+g(\omega),\omega)+\tfrac12\nabla\cdot g(\omega)\Big],\tag{7}$$
where the essential supremum is taken with respect to the probability measure $\mathbb P$, and the infimum is taken over functions $g:\Omega\to\mathbb R^d$ such that $\mathbb Eg=0$ and $\nabla\times g=0$ weakly. Here, $\nabla$ is the generator of the group $\{\tau_x\}$; that is,
$$\nabla f(\omega)\cdot v=\lim_{t\to0}\frac1t\big(f(\tau_{tv}\omega)-f(\omega)\big)\tag{8}$$
whenever the limit exists. We expect a similar formula to hold in the case of (3), namely,
$$\bar H(p)=\inf_g\operatorname{ess\,sup}_\omega H_0(p+g(\omega),\omega).\tag{9}$$
Before we turn to our next model, we make an observation regarding the homogenization of (6). Note that, if
$$H(x,p,\omega)=\tfrac12|p|^2+b(x,\omega)\cdot p+V(x,\omega)\tag{10}$$
and $u$ is a solution of (5), then, by the Hopf–Cole transform, the function $w=e^u$ solves
$$w_t=\tfrac12\Delta w+b(x,\omega)\cdot\nabla w+V(x,\omega)\,w.\tag{11}$$
By the Feynman–Kac formula, there is a probabilistic representation for $w$ using a diffusion with drift $b$. More precisely, if $X(t,x;\omega)$ denotes the solution to
$$dX(t)=b(X(t),\omega)\,dt+d\beta(t),\qquad X(0)=x,\tag{12}$$
then
$$w(x,t;\omega)=\mathbb E^\omega\,w(X(t,x;\omega),0)\exp\Big(\int_0^tV(X(s,x;\omega),\omega)\,ds\Big).\tag{13}$$
Here, $\beta$ is a standard Brownian motion, and $\mathbb E^\omega$ denotes the expected value for the process $X(t)$.
The function $V$ is the potential and, if $V\le0$, then $-V$ may be interpreted as a killing rate for the diffusion $X$. With this interpretation, $w(x,t;\omega)$ is the expected value of $w(\hat X(t),0)$, with $\hat X$ denoting the diffusion with killing. We now would like to use our probabilistic representation to rewrite $u^\varepsilon$. If
$$u^\varepsilon(x,0;\omega)=f(x)\tag{14}$$
for a deterministic initial condition $f$, then
$$u^\varepsilon(x,t;\omega)=\varepsilon\log\mathbb E^\omega\exp\Big[\varepsilon^{-1}f\big(\varepsilon X(t/\varepsilon,x/\varepsilon;\omega)\big)+\int_0^{t/\varepsilon}V(X(s,x/\varepsilon;\omega),\omega)\,ds\Big].\tag{15}$$
In particular,
$$u^\varepsilon(0,1;\omega)=\varepsilon\log\mathbb E^\omega\exp\Big[\varepsilon^{-1}f\big(\varepsilon X(\varepsilon^{-1};\omega)\big)+\int_0^{\varepsilon^{-1}}V(X(s;\omega),\omega)\,ds\Big],\tag{16}$$
where $X(s;\omega):=X(s,0;\omega)$ is the diffusion starting from the origin. On the other hand, since $\bar H$ is convex (which is evident from (7)), we may use the Hopf–Lax–Oleinik formula to write
$$\bar u(x,t)=\sup_y\Big(f(y)-t\bar L\Big(\frac{y-x}t\Big)\Big),\tag{17}$$
where $\bar L$ is the convex conjugate of $\bar H$. In particular,
$$\lim_{\varepsilon\to0}u^\varepsilon(0,1;\omega)=\bar u(0,1)=\sup_y\big(f(y)-\bar L(y)\big).\tag{18}$$
By a celebrated lemma of Varadhan, (18) is equivalent to saying that, for almost all $\omega$, the diffusion $\hat X$ satisfies a large-deviation principle with rate function $\bar L$. When $b\equiv0$ and
$$V(x,\omega)=-\sum_{j\in I}V_0(x-x_j),\tag{19}$$
with $\omega=\{x_j:j\in I\}$ a Poisson point process and $V_0$ a continuous function of compact support, the large-deviation principle for $\hat X$ was earlier established by Sznitman [e2].
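The Hopf–Cole computation behind (11) is a one-line check. With $w=e^u$ and $H$ as in (10):

```latex
\begin{aligned}
w_t &= e^u u_t, \qquad
\Delta w = e^u\bigl(\Delta u+|\nabla u|^2\bigr), \qquad
b\cdot\nabla w = e^u\,b\cdot\nabla u,\\[2pt]
\tfrac12\Delta w + b\cdot\nabla w + Vw
  &= e^u\Bigl(\underbrace{\tfrac12|\nabla u|^2+b\cdot\nabla u+V}_{H(x,\nabla u)}
      +\tfrac12\Delta u\Bigr)
   = e^u u_t = w_t,
\end{aligned}
```

which is exactly (11), using that $u$ solves (5) with the quadratic Hamiltonian (10).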

In words, the large-deviation principle for the diffusion $\hat X(\cdot;\omega)$ is equivalent to homogenization for the equation (6). Write $P_\omega$ for the law of the process $\hat X(\cdot;\omega)$. What we have in (18) is an example of a quenched large-deviation principle. We may also consider the annealed law
$$\bar P=\int P_\omega\,\mathbb P(d\omega)\tag{20}$$
and wonder whether an annealed large-deviation principle is true for the process $\hat X$. More precisely, we ask whether or not
$$\lim_{\varepsilon\to0}\varepsilon\log\int\mathbb E^\omega\exp\Big[\varepsilon^{-1}f\big(\varepsilon X(\varepsilon^{-1};\omega)\big)+\int_0^{\varepsilon^{-1}}V(X(s;\omega),\omega)\,ds\Big]\,\mathbb P(d\omega)=\sup_y\big(f(y)-J(y)\big)\tag{21}$$
for a suitable rate function $J$. In terms of $u^\varepsilon$, this is equivalent to saying
$$\lim_{\varepsilon\to0}\varepsilon\log\int e^{\varepsilon^{-1}u^\varepsilon(0,1;\omega)}\,\mathbb P(d\omega)=\sup_y\big(f(y)-J(y)\big).\tag{22}$$
This would follow if we can establish a large-deviation principle for the convergence of $u^\varepsilon$ to $\bar u$; that is, if we can find a function $K_f(y;x,t)$ such that
$$\lim_{\varepsilon\to0}\varepsilon\log\int e^{\varepsilon^{-1}\lambda u^\varepsilon(x,t;\omega)}\,\mathbb P(d\omega)=\sup_y\big(\lambda y-K_f(y;x,t)\big).\tag{23}$$
The annealed large deviation (22) in the case $b\equiv0$ and $V$ as in (19) can be found in the manuscript [e4], but (23) remains open even when $b\equiv0$.

It is worth mentioning that there is also a variational description for the large-deviation rate function $\bar L$, namely
$$\bar L(v)=\inf_a\inf_{\mu\in\Gamma_{a,v}}\int L_0(\omega,a(\omega))\,\mu(d\omega),\tag{24}$$
where $L_0(\omega,v)$ is the convex conjugate of $H_0(\omega,p)$ and $\Gamma_{a,v}$ is the set of invariant measures $\mu$ for the diffusion with generator
$$\mathcal A_a=a(\omega)\cdot\nabla+\tfrac12\Delta\qquad\text{with}\qquad\int a(\omega)\,\mu(d\omega)=v.$$
In the case of (3), the generator $\mathcal A_a$ takes the form $a\cdot\nabla$ and, when $H$ is periodic in $x$ (that is, when $\Omega$ is the $d$-dimensional torus with $\mathbb P$ the uniform measure), the formula (24) is equivalent to a formula of Mather for the averaged Lagrangian, and our homogenization is closely related to weak KAM theory. See Fathi and Maderna [e13] and Evans and Gomes [e8] for more details.

The random walk in a random environment

As our second class of examples, we consider a discrete version of the diffusion (12). This is simply a random walk in a random environment (RWRE). To this end, let us write $\mathcal P$ for the space of probability densities on the $d$-dimensional lattice $\mathbb Z^d$; that is, $p\in\mathcal P$ if $p:\mathbb Z^d\to[0,1]$ with $\sum_zp(z)=1$. We set $\Omega=\mathcal P^{\mathbb Z^d}$, and $\omega\in\Omega$ is written as $\omega=(p_a:a\in\mathbb Z^d)$. Given $\omega\in\Omega$, we write $X(n,a;\omega)$ to denote a random walk at time $n$ with starting point $a\in\mathbb Z^d$ and transition probabilities $p_a$, $a\in\mathbb Z^d$. More precisely,
$$P_\omega\big(X(n+1)=y\mid X(n)=x\big)=p_x(y-x).$$
Given a function $g:\mathbb Z^d\to\mathbb R$, we write $T^ng(x)=\mathbb E^\omega g(X(n,x;\omega))$, so that
$$T^1g(x)=\sum_{y\in\mathbb Z^d}g(y)\,p_x(y-x).$$
To compare with (11) in the case $V\equiv0$, we also write $w(x,n)=T^ng(x)$ for a given initial $g$. This trivially solves $w(x,n+1)=(T^1w(\cdot,n))(x)$. To compare with (5), we set $u=\log w$, so that
$$u(x,n+1)-u(x,n)=(\mathcal Au(\cdot,n))(x),\qquad\text{where}\qquad\mathcal Ag(x)=\log T^1e^g(x)-g(x)=\log\sum_ze^{g(x+z)-g(x)}p_x(z).$$
Now homogenization means that we are interested in
$$\bar u(x,t)=\lim_{\varepsilon\to0}\varepsilon\,u\Big(\Big[\frac x\varepsilon\Big],\Big[\frac t\varepsilon\Big];\omega\Big),$$
provided that $\omega$ is distributed according to an ergodic stationary probability measure $\mathbb P$, where $\tau_x\omega=(p_{y+x}:y\in\mathbb Z^d)$. (Here, $[a]$ denotes the integer part of $a$.)
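To make the two layers of randomness concrete, here is a minimal one-dimensional illustration (the Beta-distributed jump probabilities and the periodic wrapping of the environment are arbitrary choices for this sketch, not assumptions of the article):

```python
import random

def sample_environment(n_sites, rng):
    # one environment omega = (p_x): probability of a +1 jump at each site
    return [rng.betavariate(2, 2) for _ in range(n_sites)]

def walk(env, n_steps, rng, start=0):
    """Quenched RWRE: the environment is sampled once and frozen;
    P_omega(X(n+1) = x+1 | X(n) = x) = env[x], and X moves to x-1 otherwise.
    The environment is wrapped periodically to keep the sketch finite."""
    x, path = start, [start]
    for _ in range(n_steps):
        x += 1 if rng.random() < env[x % len(env)] else -1
        path.append(x)
    return path

rng = random.Random(0)
env = sample_environment(1000, rng)   # quenched: fix omega first ...
path = walk(env, 10000, rng)          # ... then run the walk in it
print(path[-1] / len(path))           # empirical velocity of this sample
```

Fixing `env` and averaging over walks corresponds to the quenched law $P_\omega$; averaging over fresh environments as well would correspond to the annealed law $\bar P$ below.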

Again, $\bar u$ solves (4) provided that $\lim_{\varepsilon\to0}u^\varepsilon(x,0)=\bar u(x,0)=f(x)$ exists initially. The function $\bar L$ (the convex conjugate of $\bar H$) is the quenched large-deviation rate function for $X(n;\omega)$. More precisely, for any bounded continuous $f:\mathbb R^d\to\mathbb R$,
$$\lim_{n\to\infty}n^{-1}\log\mathbb E^\omega e^{nf(n^{-1}X(n,0;\omega))}=\sup_y\big(f(y)-\bar L(y)\big).$$
This has been established under an ellipticity condition on $p_x$ by Varadhan [2]. See Bolthausen and Sznitman [e9] for a survey of earlier results. The analog of (7) is the following formula of Rosenbluth [e12]:
$$\bar H(p)=\inf_g\operatorname{ess\,sup}_\omega\log\sum_zp_0(z)\,e^{p\cdot z+g(\omega,z)},\tag{25}$$
with the infimum over functions $(g(\cdot,z):\Omega\to\mathbb R:z\in\mathbb Z^d)$ such that $\mathbb Eg(\cdot,z)=0$ and $g$ is a "closed 1-form". By the latter we mean that, for every loop $x_0,x_1,\dots,x_{k-1},x_k=x_0$, we have
$$\sum_{r=0}^{k-1}g(\tau_{x_r}\omega,x_{r+1}-x_r)=0.$$

We now turn to the annealed large deviations for a RWRE. For this, we need to select a tractable law for the environment. Pick a probability measure $\beta$ on $\mathcal P$ and let $\mathbb P$ be the product of $\beta$, to obtain a law on $\mathcal P^{\mathbb Z^d}$. The annealed measure $\bar P=\int P_\omega\,\mathbb P(d\omega)$ has a simple description. For this, we write $Z(n)=X(n+1)-X(n)$ for the jump the walk performs at time $n$. We also define
$$N_{x,z}(n)=\#\{i\in\{0,1,2,\dots,n\}:X(i)=x,\ Z(i)=z\}.$$
We certainly have
$$P_\omega\big(X(1;\omega)=x_1,\dots,X(n;\omega)=x_n\big)=\prod_{x,z\in\mathbb Z^d}(p_x(z))^{N_{x,z}(n)},$$
$$\bar P\big(X(1)=x_1,\dots,X(n)=x_n\big)=\prod_{x\in\mathbb Z^d}\int\prod_z(p(z))^{N_{x,z}(n)}\,\beta(dp),$$
where now $N_{x,z}(n)=\#\{i\in\{0,1,\dots,n-1\}:x_i=x,\ x_{i+1}-x_i=z\}$. Evidently, $\bar P$ is the law of a non-Markovian walk in $\mathbb Z^d$. Varadhan [2] established the annealed large-deviation principle under a suitable ellipticity condition on $\beta$. The method relies on the fact that the environment seen from the walker is a Markov process, to which Donsker–Varadhan theory may apply if we have enough control on the transition probabilities.

If we set
$$W_n=\big(0-X(n),X(1)-X(n),\dots,X(n-1)-X(n),X(n)-X(n)\big)=(s_{-n},\dots,s_{-1},s_0=0)$$
for the chain seen from the location $X(n)$, then we obtain a walk of length $n$ that ends at $0$. The space of such walks is denoted by $\mathcal W_n$. Under the law $\bar P$, the sequence $W_1,W_2,\dots$ is a Markov chain with the following rule:
$$\bar P\big(W_{n+1}=T^zW_n\mid W_n\big)=\frac{\bar P(T^zW_n)}{\bar P(W_n)}=\frac{\int p(z)\prod_ap(a)^{N_{0,a}}\,\beta(dp)}{\int\prod_ap(a)^{N_{0,a}}\,\beta(dp)},\tag{26}$$
where $N_{0,a}=N_{0,a}(W_n)$ is the number of jumps of size $a$ from $0$ for the walk $W_n$. Here, $T^zW_n$ denotes a walk of length $n+1$ which is formed by translating the walk $W_n$ by $-z$, so that it ends at $-z$ instead of $0$, and then making a new jump of size $z$ so that it ends at $0$.
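For a concrete instance of (26), take $\beta$ to be a Dirichlet law on the jump probabilities (a tractable choice for illustration only; the article does not single it out). The ratio of integrals in (26) then has a closed Pólya-urn form, $(\alpha_z+N_{0,z})/(\sum_a\alpha_a+\sum_aN_{0,a})$, which the following sketch verifies by evaluating both integrals through the Dirichlet normalizing constant:

```python
from math import gamma, prod

def dirichlet_norm(alpha):
    # B(alpha) = prod_a Gamma(alpha_a) / Gamma(sum_a alpha_a), so that
    # int prod_a p(a)^{N_a} Dir(alpha)(dp) = B(alpha + N) / B(alpha)
    return prod(gamma(a) for a in alpha) / gamma(sum(alpha))

def annealed_jump_prob(alpha, counts, z):
    """Right-hand side of (26) for beta = Dirichlet(alpha):
    int p(z) prod_a p(a)^{N_{0,a}} beta(dp) / int prod_a p(a)^{N_{0,a}} beta(dp)."""
    post = [a + n for a, n in zip(alpha, counts)]
    bumped = [a + (1 if i == z else 0) for i, a in enumerate(post)]  # extra p(z) factor
    return dirichlet_norm(bumped) / dirichlet_norm(post)

# two jump sizes (say -1 and +1), uniform prior, three past +1 jumps from the origin
p = annealed_jump_prob([1.0, 1.0], [0, 3], 1)
print(p, (1.0 + 3) / (2.0 + 3))   # Polya-urn form: both equal 0.8
```

This reweighting of jumps by their past counts is exactly the non-Markovian memory of $\bar P$ described above.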

We wish to establish a large-deviation principle for the Markov chain with transition probability $q(W,z)$ given by (26), where $W=W_n\in\bigcup_{m=0}^\infty\mathcal W_m$ and $z$ is the jump size. We assume that, with probability one, the support of $p_0(\cdot)$ is contained in the set $D=\{z:|z|\le C_0\}$. Naturally, $q$ extends to those infinite walks $W\in\mathcal W_\infty$ with $N_{0,a}<\infty$ for every $a\in D$. If we let $\mathcal W^{tr}$ denote the set of transient walks, then the expression $q(W,z)=q(W,T^zW)$ given by (26) defines the transition probability for a Markov chain in $\mathcal W^{tr}$. Donsker–Varadhan theory suggests that the empirical measure
$$\frac1n\sum_{m=0}^{n-1}\delta_{W_m}$$
satisfies a large-deviation principle with rate function
$$I(\mu)=\int_{\mathcal W^{tr}}\sum_zq_\mu(W,z)\log\frac{q_\mu(W,z)}{q(W,z)}\,\mu(dW),$$
where $\mu$ is any $T$-invariant measure on $\mathcal W^{tr}$, and $q_\mu(W,z)$ is the conditional probability of a jump of size $z$, given the past history. We then use the contraction principle to come up with a candidate for the large-deviation rate function
$$H(v)=\inf\Big\{I(\mu):\int z_0\,\mu(dW)=v\Big\},$$
where $(z_j:j\in\mathbb Z)$ denotes the jumps of the walk $W$. Several technical difficulties arise as one tries to apply Donsker–Varadhan theory, because of the non-compactness of the state space and the fact that the transition probabilities are not continuous. These issues are handled masterfully in [2].
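The functional $I(\mu)$ is the usual Donsker–Varadhan entropy. In a finite-state caricature (a toy two-state chain, nothing like the actual space $\mathcal W^{tr}$), it is just the expected relative entropy of a tilted kernel with respect to the reference kernel:

```python
from math import log

def dv_rate(mu, q_mu, q):
    """I(mu) = sum_w mu(w) * sum_z q_mu(w,z) * log(q_mu(w,z)/q(w,z)):
    relative entropy per step of the tilted chain q_mu against the
    reference chain q, averaged over the invariant measure mu of q_mu."""
    return sum(
        mu[w] * sum(q_mu[w][z] * log(q_mu[w][z] / q[w][z])
                    for z in range(len(q[w])) if q_mu[w][z] > 0)
        for w in range(len(q)))

q = [[0.5, 0.5], [0.5, 0.5]]        # reference transition kernel
q_mu = [[0.9, 0.1], [0.1, 0.9]]     # tilted kernel forcing long sojourns
mu = [0.5, 0.5]                     # invariant measure of the tilted kernel
print(dv_rate(mu, q_mu, q))         # strictly positive cost of the tilt
```

The rate vanishes exactly when the tilted kernel agrees with the reference one, mirroring the fact that $I(\mu)=0$ characterizes the typical behavior.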

The simple exclusion process

We now turn to our final model. This time, our environment $\omega=(p_i(t):i\in\mathbb Z)$ is a collection of independent Poisson clocks. More precisely, $p_i$, $i\in\mathbb Z$, are independent, and each $p_i$ is a Poisson process of rate 1: $p_i(t)=k$ for $t\in[\tau^i_1+\dots+\tau^i_k,\tau^i_1+\dots+\tau^i_{k+1})$, with $\tau^i_j$ independent mean-1 exponential random variables. Given a realization of $\omega$ and an initial height function
$$h^0\in\Gamma=\{h:\mathbb Z\to\mathbb Z\mid 0\le h(i+1)-h(i)\le1\},$$
we construct $h(i,t)=h(i,t;\omega)$ such that $h(\cdot,t;\omega)\in\Gamma$ for all $t$. More precisely, at each Poisson time $t=\tau^i_1+\dots+\tau^i_k$, the height $h(i,t)$ increases by one unit provided that the resulting height function $h^i$ belongs to $\Gamma$; otherwise, the increase is suppressed.

The process $h(\cdot,t)$ is a Markov process with the rule $h\mapsto h^i$ at rate $\eta(i+1)(1-\eta(i))$, where $\eta(i)=h(i)-h(i-1)$. The process $(\eta(i,t;\omega):i\in\mathbb Z)$ is also Markovian, with the interpretation that $\eta(i,t)=1$ if the site $i$ is occupied by a particle, and $\eta(i,t)=0$ if the site $i$ is vacant. Now, the growth $h\mapsto h^i$ is equivalent to a particle jumping from site $i+1$ to $i$, provided that the site $i$ is vacant. Since $h\in\Gamma$ is nondecreasing, we may define its inverse $x\in\Gamma'$, where $\Gamma'=\{x:\mathbb Z\to\mathbb Z\mid x(h+1)>x(h)\}$. Since $h$ increases at a site $i+1$ if the site $i$ is occupied by a particle, we may regard $x(h)$ as the position of a particle of label $h$. Equivalently, we may interpret $h(i)$ as the label of a particle at an occupied site $i$.
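The growth rule above can be simulated directly from the height function. The sketch below uses a finite segment with frozen boundary heights (a finite-volume caricature for illustration; the article's construction is on all of $\mathbb Z$):

```python
import random

def simulate_sep(eta0, t_max, seed=0):
    """Toy height-function simulation of the exclusion process on a finite
    segment with frozen boundary heights.  eta0: initial occupations in {0,1};
    h(i) = eta0[0] + ... + eta0[i] is the corresponding height function."""
    rng = random.Random(seed)
    n = len(eta0)
    h = [sum(eta0[:i + 1]) for i in range(n)]
    t = 0.0
    while True:
        t += rng.expovariate(n - 2)    # superposition of the interior rate-1 clocks
        if t > t_max:
            break
        i = rng.randrange(1, n - 1)    # the interior clock that rang
        # h -> h^i is allowed iff the gradients stay in {0,1}:
        # eta(i) = h(i)-h(i-1) = 0 (site i vacant) and
        # eta(i+1) = h(i+1)-h(i) = 1 (site i+1 occupied);
        # the growth moves a particle from i+1 to i.
        if h[i] - h[i - 1] == 0 and h[i + 1] - h[i] == 1:
            h[i] += 1
    eta = [h[0]] + [h[i] - h[i - 1] for i in range(1, n)]
    return h, eta
```

The allowed-growth test is precisely the rate $\eta(i+1)(1-\eta(i))$: a clock ring at $i$ produces growth only when site $i$ is vacant and site $i+1$ is occupied.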

The process $x(h,t;\omega)$ is also a Markov process, with the rule $x(h,t)\mapsto x(h,t)-1$ at rate $\mathbf1(x(h,t)-x(h-1,t)>1)$. In words, $x(h)$ decreases by one unit with rate 1, provided that the resulting configuration $x^h$ is still in $\Gamma'$. For the construction of $x(h,t;\omega)$ we may use the clocks $\omega$ or, equivalently, we may use clocks that are assigned to the labels $h\in\mathbb Z$. More precisely, if $\omega'=(p'_h(t):h\in\mathbb Z)$ is a collection of independent Poisson processes of rate 1, then we decrease $x(h)$ by one unit when the clock $p'_h$ rings. The processes $x(h,t;\omega)$ and $x(h,t;\omega')$ have the same distribution. If we define $\zeta(h,t)=x(h,t)-x(h-1,t)-1$, then $\zeta(h,t)$ represents the gap between the $h$-th and $(h-1)$-th particles in the exclusion process. The process $(\zeta(h,t):h\in\mathbb Z)$ is the celebrated zero-range process, and $\zeta(h,t)$ can be regarded as the occupation number at site $h$. The $\zeta$-process is also Markovian, where a $\zeta$-particle at site $h$ jumps to site $h+1$ with rate $\mathbf1(\zeta(h)>0)$.
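The three descriptions (heights $h$, particle positions $x$, gaps $\zeta$) are in bijection; here is the static bookkeeping on a small configuration (labeling the leftmost shown particle by 1 is an arbitrary convention for the sketch):

```python
def eta_to_positions(eta):
    # positions of occupied sites, ordered left to right: x(1) < x(2) < ...
    return [i for i, e in enumerate(eta) if e == 1]

def positions_to_gaps(x):
    # zeta(h) = x(h) - x(h-1) - 1: number of empty sites between
    # consecutive particles, i.e. the zero-range occupation numbers
    return [x[h] - x[h - 1] - 1 for h in range(1, len(x))]

eta = [1, 0, 0, 1, 1, 0, 1, 0]
x = eta_to_positions(eta)        # particle positions
zeta = positions_to_gaps(x)      # inter-particle gaps
print(x, zeta)
```

A particle jump in the exclusion picture shifts one entry of `x` by one unit, which moves a single unit of mass between neighboring entries of `zeta`, exactly the zero-range move described above.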

As in the previous sections, we set
$$u^\varepsilon(x,t;\omega)=\varepsilon\,h\Big(\Big[\frac x\varepsilon\Big],\frac t\varepsilon;\omega\Big),\qquad x^\varepsilon(u,t;\omega)=\varepsilon\,x\Big(\Big[\frac u\varepsilon\Big],\frac t\varepsilon;\omega\Big),$$
and as a homogenization we expect to have $u^\varepsilon\to\bar u$ and $x^\varepsilon\to\bar x$, with $\bar u$ and $\bar x$ deterministic solutions to the Hamilton–Jacobi equations
$$\bar u_t=\bar H_1(\bar u_x)=\bar u_x(1-\bar u_x),\tag{27}$$
$$\bar x_t=\bar H_2(\bar x_u)=\bar x_u^{-1}-1.\tag{28}$$
(See [e1].) As for the large deviations, we will be interested in
$$W(x,u,t):=-\lim_{\varepsilon\to0}\varepsilon\log P\big(u^\varepsilon(x,t)\le u\big)=-\lim_{\varepsilon\to0}\varepsilon\log P\big(x^\varepsilon(u,t)\ge x\big).\tag{29}$$
(The two events coincide because $x(\cdot,t)$ is the inverse of $h(\cdot,t)$.) Evidently, $W(x,u,t)=0$ if $u\ge\bar u(x,t)$ or, equivalently, $x\le\bar x(u,t)$. However, we have that $W(x,u,t)>0$ whenever $u<\bar u(x,t)$ or, equivalently, $x>\bar x(u,t)$. As it turns out,
$$\lim_{\varepsilon\to0}\varepsilon\log P\big(u^\varepsilon(x,t)\ge u\big)=-\infty\tag{30}$$
for $u>\bar u(x,t)$ because, for such a number $u$, $\liminf_{\varepsilon\to0}\varepsilon^2\log P(u^\varepsilon(x,t)\ge u)>-\infty$, as was demonstrated by Jensen and Varadhan [1] (see also [e7] and [3]). Quoting from [1], the statement (29) has to do with the fact that one may slow down $x(h,t)$ for $h\ge h_0$ in a time interval of order $O(\varepsilon^{-1})$ by simply slowing down $x(h_0,t)$. This can be achieved for an entropy price of order $O(\varepsilon^{-1})$. However, for $x^\varepsilon(u,t)\le\bar x(u,t)-\delta$, with $\delta>0$, we need to speed up $O(\varepsilon^{-1})$-many particles for a time interval of order $O(\varepsilon^{-1})$. This requires an entropy price of order $O(\varepsilon^{-2})$.

As was observed by Seppäläinen [e5], both the $h$- and $x$-processes enjoy a strong monotonicity property. More precisely, if we write $x(h,t;\omega)=(T^\omega_tx^0)(h)$ for the $x$-process starting from the initial configuration $x^0\in\Gamma'$, then
$$T^\omega_t\Big(\sup_\alpha x^0_\alpha\Big)=\sup_\alpha T^\omega_t(x^0_\alpha).$$
In words, if the initial height $x^0=\sup_\alpha x^0_\alpha$ is the supremum of a family of height functions $x^0_\alpha$, then it suffices to evolve each $x^0_\alpha$ separately for a given realization of $\omega$, and take the supremum afterwards. From this, it is not hard to show that such a strong monotonicity must be valid for $W$, and this, in turn, implies that $W$ solves a HJ equation of the form
$$W_t=K(W_x,W_u).\tag{31}$$
Here, the initial data $W(x,u,0)$ is the large-deviation rate function at the initial time. Of course, we assume that there is a large-deviation rate function initially, and would like to derive a large-deviation principle at later times. In the case of the exclusion or zero-range process, it is not hard to guess what $K$ is because, when the process is at equilibrium, the height function at a given site has a simple description. To construct the equilibrium measures for the $x$-process, we pick a number $b\in(0,1)$ and define a random initial height function $x(\cdot,0)$ by the requirement that $x(0,0)=0$ and that $(x(h+1,0)-x(h,0)-1:h\in\mathbb Z)$ are independent geometric random variables of parameter $b$; that is, $x(h+1,0)-x(h,0)=k+1$ with probability $(1-b)b^k$. Let us write $P_b$ for the law of the corresponding process $x(h,t;\omega)$. Using Cramér's large-deviation theorem, we can readily calculate that, for $u$ positive,
$$W(x,u,0)=-\lim_{\varepsilon\to0}\varepsilon\log P_b\big(x^\varepsilon(u,0)\ge x\big)=u\Big(I_1\Big(\frac xu-1,b\Big)\Big)^+,\tag{32}$$
where $I_1(r,b)=r\log\dfrac r{b(1+r)}-\log\big[(1-b)(1+r)\big]$. As is well known (see, for example, Chapter VIII, Corollary 4.9 of Liggett [e10]), $x(0,t)$ is a Poisson process which decreases by one unit with rate $b$. Again, Cramér's theorem yields
$$W(x,0,t)=bt\Big(I_2\Big(-\frac x{bt}\Big)\Big)^+,\tag{33}$$
where $I_2(r)=r\log r-r+1$.
The expressions (31)–(33) provide us with enough information to figure out what $K$ is. We refer to [e3] for a large-deviation principle of the form (29) for a related particle system known as Hammersley's model.
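As a numerical sanity check on (33): by Cramér's theorem, the rate $I_2(r)=r\log r-r+1$ should be the Legendre transform of the log-moment generating function $\lambda\mapsto e^\lambda-1$ of a mean-1 Poisson variable. A small sketch (the grid search is for illustration only):

```python
import math

def I2(r):
    # Poisson rate function, as in (33)
    return r * math.log(r) - r + 1.0

def legendre_poisson(r, lo=-5.0, hi=5.0, steps=200001):
    # sup_lambda [lambda * r - (e^lambda - 1)] over a fine grid;
    # e^lambda - 1 is the log-MGF of a mean-1 Poisson variable
    best = -float("inf")
    for k in range(steps):
        lam = lo + (hi - lo) * k / (steps - 1)
        best = max(best, lam * r - (math.exp(lam) - 1.0))
    return best

for r in (0.5, 1.0, 2.0, 3.0):
    print(r, I2(r), legendre_poisson(r))
```

The supremum is attained at $\lambda=\log r$, and the rate vanishes exactly at the mean $r=1$, as a rate function must.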

Alternatively, we may study the large deviations of the particle densities. For this purpose, we define the empirical measure by
$$\pi^\varepsilon(t,dx)=\pi^\varepsilon(t,dx;\omega)=\varepsilon\sum_i\delta_{\varepsilon i}(dx)\,\eta(i,t/\varepsilon;\omega).$$
We regard $\pi^\varepsilon$ as an element of the Skorohod space $\mathcal X=D([0,T];\mathcal M)$, where $\mathcal M$ is the space of locally bounded measures. The map $\omega\mapsto\pi^\varepsilon(t,dx;\omega)$ induces a probability measure $\mathcal P^\varepsilon$ on $\mathcal X$.

The hydrodynamic limit for the exclusion process means that $\mathcal P^\varepsilon\to\mathcal P$, where $\mathcal P$ is concentrated on the single entropy solution of
$$\bar\rho_t=\big(\bar\rho(1-\bar\rho)\big)_x\tag{34}$$
for a given initial data $\bar\rho(x,0)=\bar\rho^0(x)$. The function $\bar\rho$ is related to the macroscopic height function $\bar u$ by $\bar\rho=\bar u_x$. In [1], a large-deviation principle has been established for the convergence of $\mathcal P^\varepsilon$. Roughly,
$$\mathcal P^\varepsilon\big(\pi^\varepsilon(t,dx)\text{ is near }\mu(t,dx)\big)\approx e^{-\varepsilon^{-1}I(\mu)},\tag{35}$$
with the following rate function $I$: First, $I(\mu)=+\infty$ unless $\mu(t,dx)=m(x,t)\,dx$ with $m$ a weak solution of (34). However, when $0<I(m)<\infty$, then $m$ is a non-entropic solution of (34). In fact, $I(\mu)=I_0(\mu)+I_{dyn}(\mu)$, where $I_0(\mu)$ is the large-deviation rate function coming from the initial deviation, and depends only on our choice of initial configurations, and $I_{dyn}(\mu)$ is the contribution coming from the dynamics and quantitatively measures how the entropy condition is violated. By "entropy condition" we mean that, for a pair $(\varphi,q)$ with $\varphi$ convex and $\varphi'\bar H_1'=q'$ for $\bar H_1(p)=p(1-p)$, we have
$$\varphi(\bar\rho)_t+q(\bar\rho)_x\le0\tag{36}$$
in the weak sense. The left-hand side is a negative distribution, which can only be a negative measure. As our discussions around (31) and (32) indicate, the invariant measures play an essential role in determining the large-deviation rate function. As it turns out, the relevant $\varphi$ to choose is simply the large-deviation rate function for the invariant measure, which is given by
$$\varphi(m)=m\log m+(1-m)\log(1-m)+\log2.$$
Here, for the invariant measure we choose a Bernoulli measure $\nu$ under which $(\eta(i):i\in\mathbb Z)$ are independent and $\nu(\eta(i)=1)=1/2$. To measure the failure of the entropy condition, we take a weak solution $m$ for which the corresponding
$$\varphi(m)_t+q(m)_x=\gamma=\gamma^+-\gamma^-$$
is a measure, with $\gamma^+$ and $\gamma^-$ representing the positive and negative parts of $\gamma$. We now have $I_{dyn}(\mu)=\gamma^+(\mathbb R\times[0,T])$.

It is customary in equilibrium statistical mechanics to represent a state as a probability measure with density $(1/Z)e^{-\beta H}$, with $H$ some type of energy and $Z$ the normalizing constant. In non-equilibrium statistical mechanics, a large-deviation principle of the form (35) offers an analogous expression, with $I(\mu)$ playing the role of an "effective" energy (or, rather, potential). What we learn from [1] is that, after the entropy solution, the most frequently visited configurations are those associated with non-entropic solutions, and the entropic price for such visits is measured by the amount by which the inequality (36) fails. Even though the entropy solutions for scalar conservation laws are rather well understood, our understanding of non-entropic solutions is rather poor, perhaps because we had no reason to pay attention to them before. The remarkable work [1] urges us to look more deeply into non-entropic solutions to gain insight into the way the microscopic densities deviate from the solutions of the macroscopic equations.

Works

[1] L. H. Jensen and S. R. S. Varadhan: Large deviations of the asymmetric exclusion process in one dimension. Preprint, 2000.

[2] S. R. S. Varadhan: "Large deviations for random walks in a random environment," Comm. Pure Appl. Math. 56:8 (August 2003), pp. 1222–1245. Dedicated to the memory of Jürgen K. Moser. MR 1989232. Zbl 1042.60071.

[3] S. R. S. Varadhan: "Large deviations for the asymmetric simple exclusion process," pp. 1–27 in Stochastic analysis on large scale interacting systems. Edited by T. Funaki and H. Osada. Advanced Studies in Pure Mathematics 39. Math. Soc. Japan (Tokyo), 2004. MR 2073328. Zbl 1114.60026.

[4] S. R. S. Varadhan: "Random walks in a random environment," Proc. Indian Acad. Sci. Math. Sci. 114:4 (2004), pp. 309–318. MR 2067696. Zbl 1077.60078. ArXiv math/0503089.

[5] E. Kosygina, F. Rezakhanlou, and S. R. S. Varadhan: "Stochastic homogenization of Hamilton–Jacobi–Bellman equations," Comm. Pure Appl. Math. 59:10 (2006), pp. 1489–1521. MR 2248897. Zbl 1111.60055.

[6] S. R. S. Varadhan: "Homogenization," Math. Student 76:1–4 (2007), pp. 129–136. MR 2522935. Zbl 1182.35023.

[7] S. R. S. Varadhan: "Homogenization of random Hamilton–Jacobi–Bellman equations," pp. 397–403 in Probability, geometry and integrable systems. Edited by M. Pinsky and B. Birnir. Mathematical Sciences Research Institute Publications 55. Cambridge University Press, 2008. MR 2407606. Zbl 1160.35334.

[8] E. Kosygina and S. R. S. Varadhan: "Homogenization of Hamilton–Jacobi–Bellman equations with respect to time-space shifts in a stationary ergodic medium," Comm. Pure Appl. Math. 61:6 (2008), pp. 816–847. MR 2400607. Zbl 1144.35008.