So let's take our minds off such painful, laborious sports and do something relaxing, like analysis. Yes. Analysis.
I feel that most pure mathematicians that I know have very little tolerance or appreciation for analysis, and I am certainly no hipster-analysis-lover. But I've been looking at Osgood, Phillips and Sarnak's paper on finding the (unique) nicest metric compatible with the complex structure of a Riemann surface R. To be bluntly honest, I haven't been exposed to very much functional analysis, and the idea here is sorta cool. Let's try to really crudely summarise the schematic of what I suspect is a generic strategy in this area:
- Take a big vector space W of functions, like a Sobolev space, and define a functional F:W→R on it. Often, the functional measures some sort of energy of these functions, which are defined on a Riemannian manifold.
- Show that F:W→R is strictly convex (resp. concave); this means that the global minimum (maximum) of F, if it exists, is unique.
- Plonk W in a bigger space H - often an Lp space. Something like the Rellich-Kondrachov theorem is then used to show that a sequence of functions in W whose energies tend to the infimum of F has a convergent subsequence tending to some φ∈H.
- Use stuff like the elliptic regularity theorem to show that φ is a lot smoother than your average function in H - in fact, that it lies in W - and is hence unique by the second point.
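As a toy numerical illustration of this minimisation scheme (my own sketch, nothing from the paper): take the discretised Dirichlet energy on [0,1] with fixed boundary values, which is strictly convex, and run gradient descent to produce a minimising sequence. The unique minimiser of the continuous problem is the linear interpolant, and the discrete iterates converge to it.

```python
import numpy as np

# Toy direct method: minimise the discretised Dirichlet energy
#   F(u) = sum_i (u_{i+1} - u_i)^2 / h
# over grid functions on [0, 1] with u(0) = 0 and u(1) = 1 held fixed.
# F is strictly convex, so its minimiser (the linear interpolant) is unique.
n = 20
h = 1.0 / n
u = np.zeros(n + 1)
u[-1] = 1.0  # boundary condition u(1) = 1

for _ in range(5000):
    # Gradient of F at the interior grid points; gradient descent yields
    # a minimising sequence whose energies tend to the infimum of F.
    grad = np.zeros_like(u)
    grad[1:-1] = 2.0 * (2.0 * u[1:-1] - u[:-2] - u[2:]) / h
    u -= 0.01 * grad

# The unique minimiser of the continuous problem is u(x) = x.
linear = np.linspace(0.0, 1.0, n + 1)
print(np.max(np.abs(u - linear)) < 1e-8)  # True
```

Of course this sidesteps the hard part - in finite dimensions compactness is free, whereas the whole point of steps 3 and 4 above is to recover it in infinite dimensions.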
During the process of going through and verifying each step of this proof, there were two things that I couldn't get my head around. The first I'm still paranoid about, and I'll probably ask about it on MathOverflow in a day or two: it has to do with whether elliptic regularity holds for sufficiently nice nonlinear elliptic operators. The second had to do with why a strongly convergent sequence of measurable functions in Lp(Ω), for Ω⊂Rn with the volume measure dV, always has a subsequence that converges pointwise almost everywhere (a.e.) to a measurable function.
Let's prove this jazz.
So we've got a sequence of measurable real functions $f_n\colon\Omega\to\mathbb{R}$ in $L^p(\Omega)$ converging strongly to some $f$, and we want to show that there's a subsequence that converges pointwise a.e. to $f$. Let's begin by figuring out what pointwise convergence actually means.
In order for the sequence $\{f_n\}$ to converge to some function $f$, it needs to get arbitrarily close to $f$ as $n$ blows up. That is: for any $x\in\Omega$ and any $\epsilon>0$, $\limsup_{n\to\infty}|f_n(x)-f(x)|\le\epsilon$.
And since we're dealing with a.e. pointwise convergence, let's just turn this into a measure-theoretic statement: for any $\epsilon>0$,
$$\mathrm{Vol}\bigl(\{x\in\Omega:\limsup_{n\to\infty}|f_n(x)-f(x)|>\epsilon\}\bigr)=0.$$
Okay, good, we've heuristically justified why this condition is equivalent to pointwise a.e. convergence.
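To see why passing to a subsequence is genuinely necessary, here's a quick numerical look (my own example, not anything from the paper) at the classic "typewriter" sequence of indicators of dyadic intervals sweeping across [0,1). It converges to 0 in $L^1$, yet converges pointwise at no point; still, an a.e.-convergent subsequence exists, exactly as the theorem promises.

```python
import numpy as np

# The "typewriter" sequence on [0, 1): for each level k, the functions
# f_n with n = 2^k + j (0 <= j < 2^k) are indicators of the dyadic
# intervals [j/2^k, (j+1)/2^k).  Since ||f_n||_1 = 2^{-k} -> 0, the
# sequence converges to 0 in L^1 -- but every x is hit (f_n(x) = 1)
# once per level, so f_n(x) converges at no x in [0, 1).

def f(n, x):
    k = int(np.floor(np.log2(n)))  # level of the index n
    j = n - 2 ** k                 # which dyadic interval at that level
    return 1.0 if j / 2 ** k <= x < (j + 1) / 2 ** k else 0.0

xs = np.linspace(0.05, 0.95, 19)  # sample points in (0, 1)

# Every sample point keeps getting hit as n runs through the sequence.
hits = [max(f(n, x) for n in range(1, 2 ** 10)) for x in xs]
print(all(v == 1.0 for v in hits))  # True

# The subsequence n_m = 2^m (first interval of each level) converges
# to 0 at every x > 0 -- an a.e.-convergent subsequence.
sub = [f(2 ** 9, x) for x in xs]
print(all(v == 0.0 for v in sub))  # True
```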
Now, just by interpreting what lim sup is actually doing, we see that
$$\mathrm{Vol}\bigl(\{x\in\Omega:\limsup_{n\to\infty}|f_n(x)-f(x)|>\epsilon\}\bigr)=\lim_{n\to\infty}\mathrm{Vol}\Bigl(\bigcup_{k=n}^{\infty}\{x:|f_k(x)-f(x)|>\epsilon\}\Bigr)$$
(the unions shrink as $n$ grows, so the limit on the right exists).
So if we can show that
$$\mathrm{Vol}\Bigl(\bigcup_{k=n}^{\infty}\{x:|f_k(x)-f(x)|>\epsilon\}\Bigr)\le\sum_{k=n}^{\infty}\mathrm{Vol}\bigl(\{x:|f_k(x)-f(x)|>\epsilon\}\bigr)$$
(the inequality being countable subadditivity) tends to 0, then we're done. But of course, one way of showing that the tail of a series converges to 0 is to show that the series converges.
Now, if we knew that $\mathrm{Vol}(\{x:|f_k(x)-f(x)|>\epsilon\})$ tended to 0, we could easily take a subsequence $\{f_{n_m}\}$ so that
$$\mathrm{Vol}\bigl(\{x:|f_{n_m}(x)-f(x)|>\epsilon\}\bigr)<2^{-m}$$
and this subsequence would have to converge pointwise a.e. to f - which is precisely the type of conclusion that we'd like. So everything boils down to using strong convergence to get that $\mathrm{Vol}(\{x:|f_k(x)-f(x)|>\epsilon\})\to 0$ as $k\to\infty$.
This, however, can be shown as follows:
$$\begin{aligned}
\mathrm{Vol}(\{x:|f_k(x)-f(x)|>\epsilon\})&=\mathrm{Vol}(\{x:\epsilon^{-1}|f_k(x)-f(x)|>1\})\\
&=\int_{\{x:\epsilon^{-1}|f_k(x)-f(x)|>1\}}dV\\
&\le\int_{\{x:\epsilon^{-1}|f_k(x)-f(x)|>1\}}\epsilon^{-p}|f_k(x)-f(x)|^p\,dV\\
&\le\int_{\Omega}\epsilon^{-p}|f_k(x)-f(x)|^p\,dV\\
&=\epsilon^{-p}\|f_k-f\|_p^p.
\end{aligned}$$
Since strong convergence is defined to mean that $\|f_n-f\|_p$ tends to 0 and $\epsilon$ is just some fixed constant, this proves our claim. Oh, and I should probably point out that $\|\cdot\|_p$ is notation for the norm on $L^p(\Omega)$.
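Just as a sanity check of that Chebyshev-style estimate, here's a quick computation (my own numerics; the choices $f_k(x)=x^k$ on $[0,1]$, $f=0$, $p=2$, $\epsilon=1/2$ are arbitrary): the measure of the bad set should be bounded by $\epsilon^{-p}\|f_k-f\|_p^p$ for every $k$, and both sides should shrink as $k$ grows.

```python
import numpy as np

# Sanity check of Vol({|f_k - f| > eps}) <= eps^{-p} ||f_k - f||_p^p
# with the arbitrary choices f_k(x) = x^k on [0, 1], f = 0, p = 2.
p, eps = 2, 0.5
xs = np.linspace(0.0, 1.0, 100001)
dx = xs[1] - xs[0]

for k in range(1, 20):
    fk = xs ** k
    vol_bad = np.sum(fk > eps) * dx             # Vol({x : |f_k(x)| > eps})
    bound = eps ** (-p) * np.sum(fk ** p) * dx  # eps^{-p} ||f_k||_p^p
    assert vol_bad <= bound + 1e-6              # the Chebyshev-style bound
    # ...and both sides tend to 0 as k grows, as the proof requires.

print("bound holds for k = 1..19")
```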
I really ought to have finished writing this post before the match ended. Oh well. =)