This is the eleventh thread for the Polymath8b project to obtain new bounds for the quantity

H_m := \liminf_{n \to\infty} p_{n+m} - p_n;

the previous thread may be found here.

The main focus is now on writing up the results, with a draft paper close to completion here (with the directory of source files here).    Most of the sections are now written up more or less completely, with the exception of the appendix on narrow admissible tuples, which was awaiting the bounds on such tuples to stabilise.  There is now also an acknowledgments section (linking to the corresponding page on the wiki, which participants should check to see if their affiliations etc. are posted correctly), and in the final remarks section there is now also some discussion about potential improvements to the H_m bounds.  I’ve also added some mention of a recent paper of Banks, Freiberg and Maynard which makes use of some of our results (in particular, that M_{50,1/25} > 4).  On the other hand, the portions of the writeup relating to potential improvements to the MPZ estimates have been commented out, as it appears that one cannot easily obtain the exponential sum estimates required to make those go through.  (Perhaps, if there are significant new developments, one could incorporate them into a putative Polymath8c project, although at present I think there’s not much urgency to start over once again.)

Regarding the numerics in Section 7 of the paper, one thing which is missing at present is some links to code in case future readers wish to verify the results; alternatively, one could include such code and data in the arXiv submission.

It’s about time to discuss possible journals to submit the paper to.  Ken Ono has invited us to submit to his new journal, “Research in the Mathematical Sciences”.  Another option would be to submit to the same journal “Algebra & Number Theory” that is currently handling our Polymath8a paper (no news on the submission there, but it is a very long paper), although I think the papers are independent enough that it is not absolutely necessary to place them in the same journal.  A third natural choice is “Mathematics of Computation”, though I should note that when the Polymath4 paper was submitted there, the editors required us to use our real names instead of the D.H.J. Polymath pseudonym as it would have messed up their metadata system otherwise.  (But I can check with the editor there before submitting to see if there is some workaround now, perhaps their policies have changed.)  At present I have no strong preferences regarding journal selection, and would welcome further thoughts and proposals.  (It is perhaps best to avoid the journals that I am editor or associate editor of, namely Amer. J. Math., Forum of Mathematics, Analysis & PDE, and Dynamics and PDE, due to conflict of interest (and in the latter two cases, specialisation to a different area of mathematics)).


Many fluid equations are expected to exhibit turbulence in their solutions, in which a significant portion of their energy ends up in high frequency modes. A typical example arises from the three-dimensional periodic Navier-Stokes equations

\displaystyle  \partial_t u + u \cdot \nabla u = \nu \Delta u + \nabla p + f

\displaystyle  \nabla \cdot u = 0

where {u: {\bf R} \times {\bf R}^3/{\bf Z}^3 \rightarrow {\bf R}^3} is the velocity field, {f: {\bf R} \times {\bf R}^3/{\bf Z}^3 \rightarrow {\bf R}^3} is a forcing term, {p: {\bf R} \times {\bf R}^3/{\bf Z}^3 \rightarrow {\bf R}} is a pressure field, and {\nu > 0} is the viscosity. To study the dynamics of energy for this system, we first pass to the Fourier transform

\displaystyle  \hat u(t,k) := \int_{{\bf R}^3/{\bf Z}^3} u(t,x) e^{-2\pi i k \cdot x}\ dx

so that the system becomes

\displaystyle  \partial_t \hat u(t,k) + 2\pi \sum_{k = k_1 + k_2} (\hat u(t,k_1) \cdot ik_2) \hat u(t,k_2) =

\displaystyle  - 4\pi^2 \nu |k|^2 \hat u(t,k) + 2\pi ik \hat p(t,k) + \hat f(t,k) \ \ \ \ \ (1)

\displaystyle  k \cdot \hat u(t,k) = 0.

We may normalise {u} (and {f}) to have mean zero, so that {\hat u(t,0)=0}. Then we introduce the dyadic energies

\displaystyle  E_N(t) := \sum_{|k| \sim N} |\hat u(t,k)|^2

where {N \geq 1} ranges over the powers of two, and {|k| \sim N} is shorthand for {N \leq |k| < 2N}. Taking the inner product of (1) with {\hat u(t,k)} and summing over {|k| \sim N}, we obtain the energy flow equation

\displaystyle  \partial_t E_N = \sum_{N_1,N_2} \Pi_{N,N_1,N_2} - D_N + F_N \ \ \ \ \ (2)

where {N_1,N_2} range over powers of two, {\Pi_{N,N_1,N_2}} is the energy flow rate

\displaystyle  \Pi_{N,N_1,N_2} := -2\pi \sum_{k=k_1+k_2: |k| \sim N, |k_1| \sim N_1, |k_2| \sim N_2}

\displaystyle  (\hat u(t,k_1) \cdot ik_2) (\hat u(t,k) \cdot \hat u(t,k_2)),

{D_N} is the energy dissipation rate

\displaystyle  D_N := 4\pi^2 \nu \sum_{|k| \sim N} |k|^2 |\hat u(t,k)|^2

and {F_N} is the energy injection rate

\displaystyle  F_N := \sum_{|k| \sim N} \hat u(t,k) \cdot \hat f(t,k).
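
(Note that the pressure term {2\pi ik \hat p(t,k)} in (1) makes no contribution to this energy budget, as it is orthogonal to {\hat u(t,k)} thanks to the divergence-free condition {k \cdot \hat u(t,k) = 0}.)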

The Navier-Stokes equations are notoriously difficult to solve in general. Despite this, Kolmogorov in 1941 was able to give a convincing heuristic argument for what the distribution of the dyadic energies {E_N} should become over long times, assuming that some sort of distributional steady state is reached. It is common to present this argument in the form of dimensional analysis, but one can also give a more “first principles” form of Kolmogorov’s argument, which I will do here. Heuristically, one can divide the frequency scales {N} into three regimes:

  • The injection regime in which the energy injection rate {F_N} dominates the right-hand side of (2);
  • The energy flow regime in which the flow rates {\Pi_{N,N_1,N_2}} dominate the right-hand side of (2); and
  • The dissipation regime in which the dissipation {D_N} dominates the right-hand side of (2).

If we assume a fairly steady and smooth forcing term {f}, then {\hat f} will be supported on the low frequency modes {k=O(1)}, and so we heuristically expect the injection regime to consist of the low scales {N=O(1)}. Conversely, if we take the viscosity {\nu} to be small, we expect the dissipation regime to only occur for very large frequencies {N}, with the energy flow regime occupying the intermediate frequencies.

We can heuristically predict the dividing line between the energy flow regime and the dissipation regime. Of all the flow rates {\Pi_{N,N_1,N_2}}, it turns out in practice that the terms in which {N_1,N_2 = N+O(1)} (i.e., interactions between comparable scales, rather than widely separated scales) will dominate the other flow rates, so we will focus just on these terms. It is convenient to return to physical space, decomposing the velocity field {u} into Littlewood-Paley components

\displaystyle  u_N(t,x) := \sum_{|k| \sim N} \hat u(t,k) e^{2\pi i k \cdot x}

of the velocity field {u(t,x)} at frequency {N}. By Plancherel’s theorem, this field will have an {L^2} norm of {E_N(t)^{1/2}}, and as a naive model of turbulence we expect this field to be spread out more or less uniformly on the torus, so we have the heuristic

\displaystyle  |u_N(t,x)| = O( E_N(t)^{1/2} ),

and a similar heuristic applied to {\nabla u_N} gives

\displaystyle  |\nabla u_N(t,x)| = O( N E_N(t)^{1/2} ).

(One can consider modifications of the Kolmogorov model in which {u_N} is concentrated on a lower-dimensional subset of the three-dimensional torus, leading to some changes in the numerology below, but we will not consider such variants here.) Since

\displaystyle  \Pi_{N,N_1,N_2} = - \int_{{\bf R}^3/{\bf Z}^3} u_N \cdot ( (u_{N_1} \cdot \nabla) u_{N_2} )\ dx

we thus arrive at the heuristic

\displaystyle  \Pi_{N,N_1,N_2} = O( N_2 E_N^{1/2} E_{N_1}^{1/2} E_{N_2}^{1/2} ).
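
(Indeed, if one crudely inserts the pointwise heuristics {|u_N| = O(E_N^{1/2})}, {|u_{N_1}| = O(E_{N_1}^{1/2})}, and {|\nabla u_{N_2}| = O(N_2 E_{N_2}^{1/2})} into the above integral, and recalls that the torus {{\bf R}^3/{\bf Z}^3} has unit volume, one arrives at this bound.)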

Of course, there is the possibility that due to significant cancellation, the energy flow is significantly less than {O( N E_N(t)^{3/2} )}, but we will assume that cancellation effects are not that significant, so that we typically have

\displaystyle  \Pi_{N,N_1,N_2} \sim N_2 E_N^{1/2} E_{N_1}^{1/2} E_{N_2}^{1/2} \ \ \ \ \ (3)

or (assuming that {E_N} does not oscillate too much in {N}, and {N_1,N_2} are close to {N})

\displaystyle  \Pi_{N,N_1,N_2} \sim N E_N^{3/2}.

On the other hand, we clearly have

\displaystyle  D_N \sim \nu N^2 E_N.

We thus expect to be in the dissipation regime when

\displaystyle  N \gtrsim \nu^{-1} E_N^{1/2} \ \ \ \ \ (4)

and in the energy flow regime when

\displaystyle  1 \lesssim N \lesssim \nu^{-1} E_N^{1/2}. \ \ \ \ \ (5)
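
(These two thresholds come from comparing the heuristic flow rate {\Pi_{N,N_1,N_2} \sim N E_N^{3/2}} against the dissipation rate {D_N \sim \nu N^2 E_N}: the dissipation term dominates precisely when {\nu N^2 E_N \gtrsim N E_N^{3/2}}, which rearranges to {N \gtrsim \nu^{-1} E_N^{1/2}}.)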

Now we study the energy flow regime further. We assume a “statistically scale-invariant” dynamics in this regime, in particular assuming a power law

\displaystyle  E_N \sim A N^{-\alpha} \ \ \ \ \ (6)

for some {A,\alpha > 0}. From (3), we then expect an average asymptotic of the form

\displaystyle  \Pi_{N,N_1,N_2} \approx A^{3/2} c_{N,N_1,N_2} (N N_1 N_2)^{1/3 - \alpha/2} \ \ \ \ \ (7)

for some structure constants {c_{N,N_1,N_2} \sim 1} that depend on the exact nature of the turbulence; here we have replaced the factor {N_2} by the comparable term {(N N_1 N_2)^{1/3}} to make things more symmetric. In order to attain a steady state in the energy flow regime, we thus need a cancellation in the structure constants:

\displaystyle  \sum_{N_1,N_2} c_{N,N_1,N_2} (N N_1 N_2)^{1/3 - \alpha/2} \approx 0. \ \ \ \ \ (8)

On the other hand, if one is assuming statistical scale invariance, we expect the structure constants to be scale-invariant (in the energy flow regime), in that

\displaystyle  c_{\lambda N, \lambda N_1, \lambda N_2} = c_{N,N_1,N_2} \ \ \ \ \ (9)

for dyadic {\lambda > 0}. Also, since the Euler equations conserve energy, the energy flows {\Pi_{N,N_1,N_2}} symmetrise to zero,

\displaystyle  \Pi_{N,N_1,N_2} + \Pi_{N,N_2,N_1} + \Pi_{N_1,N,N_2} + \Pi_{N_1,N_2,N} + \Pi_{N_2,N,N_1} + \Pi_{N_2,N_1,N} = 0,

which from (7) suggests a similar cancellation among the structure constants

\displaystyle  c_{N,N_1,N_2} + c_{N,N_2,N_1} + c_{N_1,N,N_2} + c_{N_1,N_2,N} + c_{N_2,N,N_1} + c_{N_2,N_1,N} \approx 0.

Combining this with the scale-invariance (9), we see that for fixed {N}, we may organise the structure constants {c_{N,N_1,N_2}} for dyadic {N_1,N_2} into sextuples which sum to zero (including some degenerate tuples of order less than six). This will automatically guarantee the cancellation (8) required for a steady state energy distribution, provided that

\displaystyle  \frac{1}{3} - \frac{\alpha}{2} = 0

or in other words

\displaystyle  \alpha = \frac{2}{3};

for any other value of {\alpha}, there is no particular reason to expect this cancellation (8) to hold. Thus we are led to the heuristic conclusion that the most stable power law distribution for the energies {E_N} is the {2/3} law

\displaystyle  E_N \sim A N^{-2/3} \ \ \ \ \ (10)

or, in terms of shell energies (note that the dyadic block {N \leq |k| < 2N} consists of roughly {N} unit shells {|k| = k_0 + O(1)}, each of which is expected to carry a comparable share {\sim E_N/N} of the dyadic energy), we have the famous Kolmogorov 5/3 law

\displaystyle  \sum_{|k| = k_0 + O(1)} |\hat u(t,k)|^2 \sim A k_0^{-5/3}.

Given that frequency interactions tend to cascade from low frequencies to high (if only because there are so many more high frequencies than low ones), the above analysis predicts a stabilising effect around this power law: scales at which a law (6) holds for some {\alpha > 2/3} are likely to lose energy in the near-term, while scales at which a law (6) holds for some {\alpha < 2/3} are conversely expected to gain energy, thus nudging the exponent of the power law towards {2/3}.

We can solve for {A} in terms of energy dissipation as follows. If we let {N_*} be the frequency scale demarcating the transition from the energy flow regime (5) to the dissipation regime (4), we have

\displaystyle  N_* \sim \nu^{-1} E_{N_*}^{1/2}

and hence by (10)

\displaystyle  N_* \sim \nu^{-1} A^{1/2} N_*^{-1/3}.

On the other hand, if we let {\epsilon := D_{N_*}} be the energy dissipation at this scale {N_*} (which we expect to be the dominant scale of energy dissipation), we have

\displaystyle  \epsilon \sim \nu N_*^2 E_{N_*} \sim \nu N_*^2 A N_*^{-2/3}.

Some simple algebra then lets us solve for {A} and {N_*} as

\displaystyle  N_* \sim (\frac{\epsilon}{\nu^3})^{1/4}

and

\displaystyle  A \sim \epsilon^{2/3}.
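
(For the record, the algebra here runs as follows: the relation {N_* \sim \nu^{-1} A^{1/2} N_*^{-1/3}} rearranges to {A \sim \nu^2 N_*^{8/3}}, and substituting this into {\epsilon \sim \nu A N_*^{4/3}} gives {\epsilon \sim \nu^3 N_*^4}, which can then be solved for {N_*} and {A} as displayed.)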

Thus, we have the Kolmogorov prediction

\displaystyle  \sum_{|k| = k_0 + O(1)} |\hat u(t,k)|^2 \sim \epsilon^{2/3} k_0^{-5/3}

for

\displaystyle  1 \lesssim k_0 \lesssim (\frac{\epsilon}{\nu^3})^{1/4}

with energy dissipation occurring at the high end {k_0 \sim (\frac{\epsilon}{\nu^3})^{1/4}} of this scale, which is counterbalanced by the energy injection at the low end {k_0 \sim 1} of the scale.

Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, thus for instance {V} could be an affine variety

\displaystyle  V = \{ x \in {\bf A}^d: P_1(x) = \dots = P_m(x) = 0\} \ \ \ \ \ (1)

where {{\bf A}^d} is {d}-dimensional affine space and {P_1,\dots,P_m: {\bf A}^d \rightarrow {\bf A}} are a finite collection of polynomials with coefficients in {{\bf F}_q}. Then one can define the set {V[{\bf F}_q]} of {{\bf F}_q}-rational points, and more generally the set {V[{\bf F}_{q^n}]} of {{\bf F}_{q^n}}-rational points for any {n \geq 1}, since {{\bf F}_{q^n}} can be viewed as a field extension of {{\bf F}_q}. Thus for instance in the affine case (1) we have

\displaystyle  V[{\bf F}_{q^n}] := \{ x \in {\bf F}_{q^n}^d: P_1(x) = \dots = P_m(x) = 0\}.

The Weil conjectures are concerned with understanding the number

\displaystyle  S_n := |V[{\bf F}_{q^n}]| \ \ \ \ \ (2)

of {{\bf F}_{q^n}}-rational points of a variety {V}. The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, and let {S_n} be given by (2). Then there exist a finite number of algebraic integers {\alpha_1,\dots,\alpha_k, \beta_1,\dots,\beta_{k'} \in O_{\overline{{\bf Q}}}} (known as characteristic values of {V}), such that

\displaystyle  S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n

for all {n \geq 1}.

After cancelling, we may of course assume that {\alpha_i \neq \beta_j} for any {i=1,\dots,k} and {j=1,\dots,k'}, and then it is easy to see (as we will see below) that the {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}} become uniquely determined up to permutations of the {\alpha_1,\dots,\alpha_k} and {\beta_1,\dots,\beta_{k'}}. These values are known as the characteristic values of {V}. Since {S_n} is a rational integer (i.e. an element of {{\bf Z}}) rather than merely an algebraic integer (i.e. an element of the ring of integers {O_{\overline{{\bf Q}}}} of the algebraic closure {\overline{{\bf Q}}} of {{\bf Q}}), we conclude from the above-mentioned uniqueness that the set of characteristic values is invariant with respect to the Galois group {Gal(\overline{{\bf Q}} / {\bf Q} )}. To emphasise this Galois invariance, we will not fix a specific embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}} of the algebraic numbers into the complex field {{\bf C} = {\bf C}_\infty}, but work with all such embeddings simultaneously. (Thus, for instance, {\overline{{\bf Q}}} contains three cube roots of {2}, but which of these is assigned to the complex numbers {2^{1/3}}, {e^{2\pi i/3} 2^{1/3}}, {e^{4\pi i/3} 2^{1/3}} will depend on the choice of embedding {\iota_\infty}.)

An equivalent way of phrasing Dwork’s theorem is that the ({T}-form of the) zeta function

\displaystyle \zeta_V(T) := \exp( \sum_{n=1}^\infty \frac{S_n}{n} T^n )

associated to {V} (which is well defined as a formal power series in {T}, at least) is equal to a rational function of {T} (with the reciprocals of the {\alpha_1,\dots,\alpha_k} and {\beta_1,\dots,\beta_{k'}} being the poles and zeroes of {\zeta_V} respectively). Here, we use the formal exponential

\displaystyle  \exp(X) := 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \dots.

Equivalently, the ({s}-form of the) zeta-function {s \mapsto \zeta_V(q^{-s})} is a meromorphic function on the complex numbers {{\bf C}} which is also periodic with period {2\pi i/\log q}, and which has only finitely many poles and zeroes up to this periodicity.
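
(To take the two simplest examples: for the affine line {V = {\bf A}^1} one has {S_n = q^n}, so that {\zeta_V(T) = \exp( \sum_{n=1}^\infty \frac{q^n}{n} T^n ) = \frac{1}{1-qT}}, with a single characteristic value {\alpha_1 = q}; for the projective line {V = {\bf P}^1} one has {S_n = q^n + 1} and {\zeta_V(T) = \frac{1}{(1-T)(1-qT)}}, with characteristic values {\alpha_1 = q}, {\alpha_2 = 1} and no {\beta_j}.)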

Dwork’s argument relies primarily on {p}-adic analysis – an analogue of complex analysis, but over an algebraically closed (and metrically complete) extension {{\bf C}_p} of the {p}-adic field {{\bf Q}_p}, rather than over the Archimedean complex numbers {{\bf C}}. The argument is quite effective, and in particular gives explicit upper bounds for the number {k+k'} of characteristic values in terms of the complexity of the variety {V}; for instance, in the affine case (1) with {V} of degree {D}, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound {k+k' \leq (4D+9)^{2d+1}}, and a subsequent paper of Hooley established the slightly weaker bound {k+k' \leq (11D+11)^{d+m+2}} purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field {{\bf F}_q}, which is an important fact for many analytic number theory applications.

These {p}-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let {V} be a quasiprojective variety defined over a finite field {{\bf F}_q}, and let {\lambda \in \overline{{\bf Q}}} be a characteristic value of {V}. Then there exists a natural number {w} such that {|\iota_\infty(\lambda)|_\infty = q^{w/2}} for every embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}, where {| |_\infty} denotes the usual absolute value on the complex numbers {{\bf C} = {\bf C}_\infty}. (Informally: {\lambda} and all of its Galois conjugates have complex magnitude {q^{w/2}}.)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the {s}-form {s \mapsto \zeta_V(q^{-s})} lie on the critical lines {\{ s \in {\bf C}: \hbox{Re}(s) = \frac{w}{2} \}} for {w=0,1,2,\dots}. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses {p}-adic analysis, Deligne uses the essentially orthogonal technique of {\ell}-adic cohomology to establish his theorem. However, {\ell}-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya, {p}-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the {\ell}-adic methods are tied to the intrinsic geometry of {V} (such as the structure of sheaves and covers over {V}), while the {p}-adic methods are more tied to the extrinsic geometry of {V} (how {V} sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers {S_n} both in an “Archimedean” sense (embedding the rational integers inside the complex numbers {{\bf C}_\infty} with the usual norm {||_\infty}) as well as in the “{p}-adic” sense, with {p} the characteristic of {{\bf F}_q} (embedding the integers now in the “complexification” {{\bf C}_p} of the {p}-adic numbers {{\bf Q}_p}, which is equipped with a norm {||_p} that we will recall later). (This is in contrast to the methods of {\ell}-adic cohomology, in which one primarily works over an {\ell}-adic field {{\bf Q}_\ell} with {\ell \neq p,\infty}.) The Archimedean control is trivial:

Proposition 3 (Archimedean control of {S_n}) With {S_n} as above, and any embedding {\iota_\infty: \overline{{\bf Q}} \rightarrow {\bf C}}, we have

\displaystyle  |\iota_\infty(S_n)|_\infty \leq C q^{A n}

for all {n} and some {C, A >0} independent of {n}.

Proof: Since {S_n} is a rational integer, {|\iota_\infty(S_n)|_\infty} is just {|S_n|_\infty}. By decomposing {V} into affine pieces, we may assume that {V} is of the affine form (1); then we trivially have {|S_n|_\infty \leq q^{nd}}, and the claim follows. \Box

Another way of thinking about this Archimedean control is that it guarantees that the zeta function {T \mapsto \zeta_V(T)} can be defined holomorphically on the open disk in {{\bf C}_\infty} of radius {q^{-A}} centred at the origin.

The {p}-adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 ({p}-adic control of {S_n}) With {S_n} as above, and using an embedding {\iota_p: \overline{{\bf Q}} \rightarrow {\bf C}_p} (defined later) with {p} the characteristic of {{\bf F}_q}, we can find for any real {A > 0} a finite number of elements {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'} \in {\bf C}_p} such that

\displaystyle  |\iota_p(S_n) - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq q^{-An}

for all {n}.

Another way of thinking about this {p}-adic control is that it guarantees that the zeta function {T \mapsto \zeta_V(T)} can be defined meromorphically on the entire {p}-adic complex field {{\bf C}_p}.

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of {p}-adic magnitude at most {Cq^{-An}}; (b) the fact that the number {k+k'} of potential characteristic values here may go to infinity as {A \rightarrow \infty}; and (c) the potential characteristic values {\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_{k'}} only exist inside the complexified {p}-adics {{\bf C}_p}, rather than in the algebraic integers {O_{\overline{{\bf Q}}}}. However, it turns out that by combining {p}-adic control on {S_n} in Proposition 4 with the trivial control on {S_n} in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of {S_n} (other than the obvious fact that the {S_n} are rational integers), with the {A} in Proposition 4 chosen to exceed the {A} in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of {V} such as (1) to obtain the following decomposition of {S_n}:

Proposition 5 (Decomposition of {S_n}) With {\iota_p} and {S_n} as above, we can decompose {\iota_p(S_n)} as a finite linear combination (over the integers) of sequences {S'_n \in {\bf C}_p}, such that for each such sequence {n \mapsto S'_n}, the zeta functions

\displaystyle  \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n ) = \sum_{n=0}^\infty c_n T^n

are entire in {{\bf C}_p}, by which we mean that

\displaystyle  |c_n|_p^{1/n} \rightarrow 0

as {n \rightarrow \infty}.

This proposition will ultimately be a consequence of the properties of the Teichmuller lifting {\tau: \overline{{\bf F}_p}^\times \rightarrow {\bf C}_p^\times}.

The second piece, which can be viewed as the “{p}-adic complex analytic” component of the proof, relates the {p}-adic entire nature of a zeta function with control on the associated sequence {S'_n}, and can be interpreted (after some manipulation) as a {p}-adic version of the Weierstrass preparation theorem:

Proposition 6 ({p}-adic Weierstrass preparation theorem) Let {S'_n} be a sequence in {{\bf C}_p}, such that the zeta function

\displaystyle  \zeta'(T) := \exp( \sum_{n=1}^\infty \frac{S'_n}{n} T^n )

is entire in {{\bf C}_p}. Then for any real {A > 0}, there exist a finite number of elements {\beta_1,\dots,\beta_{k'} \in {\bf C}_p} such that

\displaystyle  |S'_n + \beta_1^n + \dots + \beta_{k'}^n|_p \leq q^{-An}

for all {n}.

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the {||_p} norm) imply Proposition 4.
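
(The relevant point in this last step is that the {p}-adic norm is non-Archimedean, so that {|x+y|_p \leq \max(|x|_p,|y|_p)}, and that rational integers have {p}-adic norm at most {1}; thus taking the finite integer linear combinations provided by Proposition 5 does not degrade the {q^{-An}} bound coming from Proposition 6.)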


This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let {E} be a subset of {{\bf R}^3} that contains a unit line segment in every direction. Then {\hbox{dim}(E) = 3}.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set {E} is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let {{\cal L}} be a family of lines in {{\bf R}^3} that meet {B(0,1)} and contain a line in each direction. Let {E} be the union of the restriction {\ell \cap B(0,2)} to {B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

As the space of all directions in {{\bf R}^3} is two-dimensional, we thus see that {{\cal L}} is an (at least) two-dimensional subset of the four-dimensional space of lines in {{\bf R}^3} (actually, it lies in a compact subset of this space, since we have constrained the lines to meet {B(0,1)}). One could then ask if this is the only property of {{\cal L}} that is needed to establish the Kakeya conjecture, that is to say if any subset of {B(0,2)} which contains a two-dimensional family of lines (restricted to {B(0,2)}, and meeting {B(0,1)}) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in {B(0,2)} (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let {{\cal L}} be a two-dimensional family of lines in {{\bf R}^3} that meet {B(0,1)}, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound {\hbox{dim}(E) \geq 5/2} (for various notions of dimension, e.g. Hausdorff dimension) for sets {E} in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the {5/2} barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals {{\bf R}}. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over {{\bf C}}) Let {{\cal L}} be a four (real) dimensional family of complex lines in {{\bf C}^3} that meet the unit ball {B(0,1)} in {{\bf C}^3}, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every complex line {\ell} in {{\cal L}}. Then {E} has real dimension {6}.

The argument of Wolff can be adapted to the complex case to show that all sets {E} occurring in Conjecture 4 have real dimension at least {5}. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let {H \subset {\bf C}^3} be the Heisenberg group

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}

and let {{\cal L}} be the family of complex lines

\displaystyle  \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}

with {s,t \in {\bf R}} and {\alpha \in {\bf C}}. Then {H} is a five (real) dimensional subset of {{\bf C}^3} that contains every line in the four (real) dimensional set {{\cal L}}; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in {{\cal L}}. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on {H} is given by the group law

\displaystyle  (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),

giving {H} the structure of a {2}-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over {{\bf R}^2}. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines {{\cal L}} in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

\displaystyle  \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}

with {t \in {\bf R}}; multiplying this family of lines on the right by a group element in {H} gives other families of parallel lines, which in fact sweep out all of {{\cal L}}.
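
(As an illustration of the routine computation mentioned above, one can check directly that each line {\ell_{s,t,\alpha}} is contained in {H}: a point on this line takes the form {(\overline{\alpha} z + t, z, sz+\alpha)}, and since {s,t} are real one has {\hbox{Im}(\overline{\alpha} z + t) = \hbox{Im}(\overline{\alpha} z)} and {\hbox{Im}( z \overline{(sz+\alpha)} ) = \hbox{Im}( s|z|^2 + \overline{\alpha} z ) = \hbox{Im}(\overline{\alpha} z)}, giving the defining identity for {H}.)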

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield {{\bf R}} of {{\bf C}}, which induces an involution {z \mapsto \overline{z}} which can then be used to define the Heisenberg group {H} through the formula

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields {{\bf F}_{q^2}} that contain a “half-dimensional” subfield {{\bf F}_q}; we leave the details to the interested reader. Morally speaking, if {{\bf R}} in turn contained a subfield of dimension {1/2} (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.

We thus see that to go beyond the {5/2} dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

  • (a) Exploit the distinct directions of the lines in {{\mathcal L}} in a way that goes beyond the Wolff axiom; or
  • (b) Exploit the fact that {{\bf R}} does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of {5/2} for Kakeya sets very slightly to {5/2+10^{-10}} (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of {{\bf F}_p}, and then pursued route (b) to obtain a corresponding improvement {5/2+\epsilon} to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

  1. Assume that the (strong) Kakeya conjecture fails, so that there are sets {E} of the form in Conjecture 3 of dimension {3-\sigma} for some {\sigma>0}. Assume that {E} is “optimal”, in the sense that {\sigma} is as large as possible.
  2. Use the optimality of {E} (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets {E}, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining {E} to “behave like” a putative Heisenberg group counterexample.
  3. By playing all these structural properties off of each other, show that {E} can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set {E} for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.


Let {{\bf F}_q} be a finite field of order {q = p^n}, and let {C} be an absolutely irreducible smooth projective curve defined over {{\bf F}_q} (and hence over the algebraic closure {k := \overline{{\bf F}_q}} of that field). For instance, {C} could be the projective elliptic curve

\displaystyle C = \{ [x,y,z]: y^2 z = x^3 + ax z^2 + b z^3 \}

in the projective plane {{\bf P}^2 = \{ [x,y,z]: (x,y,z) \neq (0,0,0) \}}, where {a,b \in {\bf F}_q} are coefficients whose discriminant {-16(4a^3+27b^2)} is non-vanishing, which is the projective version of the affine elliptic curve

\displaystyle \{ (x,y): y^2 = x^3 + ax + b \}.

To each such curve {C} one can associate a genus {g}, which we will define later; for instance, elliptic curves have genus {1}. We can also count the cardinality {|C({\bf F}_q)|} of the set {C({\bf F}_q)} of {{\bf F}_q}-points of {C}. The Hasse-Weil bound relates the two:

Theorem 1 (Hasse-Weil bound) {||C({\bf F}_q)| - q - 1| \leq 2g\sqrt{q}}.

The usual proofs of this bound proceed by first establishing a trace formula of the form

\displaystyle |C({\bf F}_{p^n})| = p^n - \sum_{i=1}^{2g} \alpha_i^n + 1 \ \ \ \ \ (1)

 

for some complex numbers {\alpha_1,\dots,\alpha_{2g}} independent of {n}; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve {C} is rational. The task is then to establish a bound {|\alpha_i| \leq \sqrt{p}} for all {i=1,\dots,2g}; this (or more precisely, the slightly stronger assertion {|\alpha_i| = \sqrt{p}}) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of {C} and using a certain duality available on the cohomology of such varieties, known as Rosati involution; alternatively, one can pass to the product surface {C \times C} and apply the Riemann-Roch theorem for that surface.
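
(Note that once the bound {|\alpha_i| \leq \sqrt{p}} is available, the trace formula (1) immediately gives {| |C({\bf F}_{p^n})| - p^n - 1 | \leq \sum_{i=1}^{2g} |\alpha_i|^n \leq 2g p^{n/2}}, which is the Hasse-Weil bound with {q = p^n}.)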

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity {|C({\bf F}_q)|}. The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If {q} is a perfect square, and {q \geq (g+1)^4}, then {|C({\bf F}_q)| \leq q + (2g+1) \sqrt{q} + 1}.

In fact, the bound on {|C({\bf F}_q)|} can be sharpened a little bit further, as we will soon see.

Theorem 2 is only an upper bound on {|C({\bf F}_q)|}, but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending {n} to infinity to control the weights {\alpha_i}) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret {|C({\bf F}_q)|} as the number of points of intersection between two curves {C_1,C_2} in the surface {C \times C}. Indeed, if we define the Frobenius endomorphism {\hbox{Frob}_q} on any projective space by

\displaystyle \hbox{Frob}_q( [x_0,\dots,x_n] ) := [x_0^q, \dots, x_n^q]

then this map preserves the curve {C}, and the fixed points of this map are precisely the {{\bf F}_q} points of {C}:

\displaystyle C({\bf F}_q) = \{ z \in C: \hbox{Frob}_q(z) = z \}.

Thus one can interpret {|C({\bf F}_q)|} as the number of points of intersection between the diagonal curve

\displaystyle \{ (z,z): z \in C \}

and the Frobenius graph

\displaystyle \{ (z, \hbox{Frob}_q(z)): z \in C \}

which are copies of {C} inside {C \times C}. But we can use the additional hypothesis that {q} is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

\displaystyle \hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}}^2

with {\hbox{Frob}_{\sqrt{q}}} also preserving {C}. One can then also interpret {|C({\bf F}_q)|} as the number of points of intersection between the curve

\displaystyle C_1 := \{ (z, \hbox{Frob}_{\sqrt{q}}(z)): z \in C \} \ \ \ \ \ (2)

 

and its transpose

\displaystyle C_2 := \{ (\hbox{Frob}_{\sqrt{q}}(w), w): w \in C \}.

Let {k(C \times C)} be the field of rational functions on {C \times C} (with coefficients in {k}), and define {k(C_1)}, {k(C_2)}, and {k(C_1 \cap C_2)} analogously (although {C_1 \cap C_2} is likely to be disconnected, so {k(C_1 \cap C_2)} will just be a ring rather than a field). We then (morally) have the commuting square

\displaystyle \begin{array}{ccccc} && k(C \times C) && \\ & \swarrow & & \searrow & \\ k(C_1) & & & & k(C_2) \\ & \searrow & & \swarrow & \\ && k(C_1 \cap C_2) && \end{array},

if we ignore the issue that a rational function on, say, {C \times C}, might blow up on all of {C_1} and thus not have a well-defined restriction to {C_1}. We use {\pi_1: k(C \times C) \rightarrow k(C_1)} and {\pi_2: k(C \times C) \rightarrow k(C_2)} to denote the restriction maps. Furthermore, we have obvious isomorphisms {\iota_1: k(C_1) \rightarrow k(C)}, {\iota_2: k(C_2) \rightarrow k(C)} coming from composing with the graphing maps {z \mapsto (z, \hbox{Frob}_{\sqrt{q}}(z))} and {w \mapsto (\hbox{Frob}_{\sqrt{q}}(w), w)}.

The idea now is to find a rational function {f \in k(C \times C)} on the surface {C \times C} of controlled degree which vanishes when restricted to {C_2}, but is non-vanishing (and not blowing up) when restricted to {C_1}. On {C_1}, we thus get a non-zero rational function {f \downharpoonright_{C_1}} of controlled degree which vanishes on {C_1 \cap C_2} – which then lets us bound the cardinality of {C_1 \cap C_2} in terms of the degree of {f \downharpoonright_{C_1}}. (In Bombieri’s original argument, one required vanishing to high order on one of the two curves, but in our presentation, we have factored out a {\hbox{Frob}_{\sqrt{q}}} term which removes this high order vanishing condition.)

To find this {f}, we will use linear algebra. Namely, we will locate a finite-dimensional subspace {V} of {k(C \times C)} (consisting of certain “controlled degree” rational functions) which projects injectively to {k(C_1)}, but whose projection to {k(C_2)} has strictly smaller dimension than {V} itself. The rank-nullity theorem then forces the existence of a non-zero element {f} of {V} whose projection to {k(C_2)} vanishes, but whose projection to {k(C_1)} is non-zero.

Now we build {V}. Pick an {{\bf F}_q} point {P_\infty} of {C}, which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that {C({\bf F}_q)} is non-empty.) Thus {P_\infty} is fixed by {\hbox{Frob}_q}. To simplify the exposition, we will also assume that {P_\infty} is fixed by the square root {\hbox{Frob}_{\sqrt{q}}} of {\hbox{Frob}_q}; in the opposite case when {\hbox{Frob}_{\sqrt{q}}} has order two when acting on {P_\infty}, the argument is essentially the same, but all references to {P_\infty} in the second factor of {C \times C} need to be replaced by {\hbox{Frob}_{\sqrt{q}} P_\infty} (we leave the details to the interested reader).

For any natural number {n}, define {R_n} to be the set of rational functions {f \in k(C)} which are allowed to have a pole of order up to {n} at {P_\infty}, but have no other poles on {C}; note that as we are assuming {C} to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have {R_n = H^0( C, {\mathcal O}_C(n P_\infty) )}.) The space {R_n} is clearly a vector space over {k}; one can view it intuitively as the space of “polynomials” on {C} of “degree” at most {n}. When {n=0}, {R_0} consists just of the constant functions. Indeed, if {f \in R_0}, then the image {f(C)} of {f} avoids {\infty} and so lies in the affine line {k = {\mathbf P}^1 \backslash \{\infty\}}; but as {C} is projective, the image {f(C)} needs to be compact (hence closed) in {{\mathbf P}^1}, and must therefore be a point, giving the claim.
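
(For instance, if {C} is the projective line {{\mathbf P}^1} and {P_\infty} is the usual point at infinity, then {R_n} is precisely the space of polynomials of degree at most {n} in the affine coordinate, which indeed has dimension {n+1}.)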

For higher {n \geq 1}, we have the easy relations

\displaystyle \hbox{dim}(R_{n-1}) \leq \hbox{dim}(R_n) \leq \hbox{dim}(R_{n-1})+1. \ \ \ \ \ (3)

 

The former inequality just comes from the trivial inclusion {R_{n-1} \subset R_n}. For the latter, observe that if two functions {f, g} lie in {R_n}, so that they each have a pole of order at most {n} at {P_\infty}, then some linear combination of these functions must have a pole of order at most {n-1} at {P_\infty}; thus {R_{n-1}} has codimension at most one in {R_n}, giving the claim.

From (3) and induction we see that each of the {R_n} are finite dimensional, with the trivial upper bound

\displaystyle \hbox{dim}(R_n) \leq n+1. \ \ \ \ \ (4)

 

Riemann’s inequality complements this with the lower bound

\displaystyle \hbox{dim}(R_n) \geq n+1-g, \ \ \ \ \ (5)

 

thus one has {\hbox{dim}(R_n) = \hbox{dim}(R_{n-1})+1} for all but at most {g} exceptions (in fact, exactly {g} exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus {g} in a non-standard fashion (as the dimension of the first Cech cohomology {H^1(C)} of the structure sheaf {{\mathcal O}_C} of {C}), but to obtain this inequality with a standard definition of {g} (e.g. as the dimension of the zeroth Cech cohomology {H^0(C, \Omega_C^1)} of the line bundle of differentials) requires the more non-trivial tool of Serre duality.

At any rate, now that we have these vector spaces {R_n}, we will define {V \subset k(C \times C)} to be a tensor product space

\displaystyle V = R_\ell \otimes R_m

for some natural numbers {\ell, m \geq 0} which we will optimise later. That is to say, {V} is spanned by functions of the form {(z,w) \mapsto f(z) g(w)} with {f \in R_\ell} and {g \in R_m}. This is clearly a linear subspace of {k(C \times C)} of dimension {\hbox{dim}(R_\ell) \hbox{dim}(R_m)}, and hence by Riemann’s inequality we have

\displaystyle \hbox{dim}(V) \geq (\ell+1-g) (m+1-g) \ \ \ \ \ (6)

 

if

\displaystyle \ell,m \geq g-1. \ \ \ \ \ (7)

 

Observe that {\iota_1 \circ \pi_1} maps a tensor product {(z,w) \mapsto f(z) g(w)} to a function {z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)}. If {f \in R_\ell} and {g \in R_m}, then we see that the function {z \mapsto f(z) g(\hbox{Frob}_{\sqrt{q}} z)} has a pole of order at most {\ell+m\sqrt{q}} at {P_\infty}. We conclude that

\displaystyle \iota_1 \circ \pi_1( V ) \subset R_{\ell + m\sqrt{q}} \ \ \ \ \ (8)

 

and in particular by (4)

\displaystyle \hbox{dim}(\pi_1(V)) \leq \ell + m \sqrt{q} + 1 \ \ \ \ \ (9)

 

and similarly

\displaystyle \hbox{dim}(\pi_2(V)) \leq \ell \sqrt{q} + m + 1. \ \ \ \ \ (10)

 

We will choose {m} to be a bit bigger than {\ell}, to make the {\pi_2} image of {V} smaller than that of {\pi_1}. From (6), (10) we see that if we have the inequality

\displaystyle (\ell+1-g) (m+1-g) > \ell \sqrt{q}+m + 1 \ \ \ \ \ (11)

 

(together with (7)) then {\pi_2} cannot be injective.

On the other hand, we have the following basic fact:

Lemma 3 (Injectivity) If

\displaystyle \ell < \sqrt{q}, \ \ \ \ \ (12)

 

then {\pi_1: V \rightarrow \pi_1(V)} is injective.

Proof: From (3), we can find a linear basis {f_1,\dots,f_a} of {R_\ell} such that each of the {f_i} has a distinct order {d_i} of pole at {P_\infty} (somewhere between {0} and {\ell} inclusive). Similarly, we may find a linear basis {g_1,\dots,g_b} of {R_m} such that each of the {g_j} has a distinct order {e_j} of pole at {P_\infty} (somewhere between {0} and {m} inclusive). The functions {z \mapsto f_i(z) g_j(\hbox{Frob}_{\sqrt{q}} z)} then span {\iota_1(\pi_1(V))}, and the order of pole at {P_\infty} is {d_i + \sqrt{q} e_j}. But since {\ell < \sqrt{q}}, these orders are all distinct, and so these functions must be linearly independent. The claim follows. \Box

This gives us the following bound:

Proposition 4 Let {\ell,m} be natural numbers such that (7), (11), (12) hold. Then {|C({\bf F}_q)| \leq \ell + m \sqrt{q}}.

Proof: As {\pi_2} is not injective, we can find {f \in V} with {\pi_2(f)} vanishing. By the above lemma, the function {\iota_1(\pi_1(f))} is then non-zero, but it must also vanish on {\iota_1(C_1 \cap C_2)}, which has cardinality {|C({\bf F}_q)|}. On the other hand, by (8), {\iota_1(\pi_1(f))} has a pole of order at most {\ell+m\sqrt{q}} at {P_\infty} and no other poles. Since a non-zero rational function on a projective curve has equally many zeroes as poles (counting multiplicity), the claim follows. \Box

If {q \geq (g+1)^4}, we may make the explicit choice

\displaystyle m := \sqrt{q}+2g; \quad \ell := \lfloor \frac{g}{g+1} \sqrt{q} \rfloor + g + 1

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case {g=0} (e.g. if {C} is just the projective line {{\mathbf P}^1}) one may take {\ell=1, m = \sqrt{q}} and conclude the absolutely sharp bound {|C({\bf F}_q)| \leq q+1} in this case; in the case of the projective line {{\mathbf P}^1}, the function {f} is in fact the very concrete function {f(z,w) := z - w^{\sqrt{q}}}.

Remark 1 When {q = p^{2n+1}} is not a perfect square, one can try to run the above argument using the factorisation {\hbox{Frob}_q = \hbox{Frob}_{p^n} \hbox{Frob}_{p^{n+1}}} instead of {\hbox{Frob}_q = \hbox{Frob}_{\sqrt{q}} \hbox{Frob}_{\sqrt{q}}}. This gives a weaker version of the above bound, of the shape {|C({\bf F}_q)| \leq q + O( \sqrt{p} \sqrt{q} )}. In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires {f} to vanish to high order at {C_1}, rather than just to first order; see this survey article of mine for details.


Roth’s theorem on arithmetic progressions asserts that every subset of the integers {{\bf Z}} of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let {G = (G,+)} be a compact abelian group, with Haar probability measure {\mu}, which is {2}-divisible (i.e. the map {x \mapsto 2x} is surjective) and let {A} be a measurable subset of {G} with {\mu(A) \geq \alpha} for some {0 < \alpha < 1}. Then we have

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,

where {X \gg_\alpha Y} denotes the bound {X \geq c_\alpha Y} for some {c_\alpha > 0} depending only on {\alpha}.

This theorem is usually formulated in the case that {G} is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group {G = {\bf Z}/N{\bf Z}} of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of {2}-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant {c_\alpha} on {\alpha}, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the {2}-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the {2r} shift in that case.

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let {\hat G} denote the Pontryagin dual of a compact abelian group {G}, that is to say the set of all continuous homomorphisms {\xi: x \mapsto \xi \cdot x} from {G} to the (additive) unit circle {{\bf R}/{\bf Z}}. Thus {\hat G} is a discrete abelian group, and functions {f \in L^2(G)} have a Fourier transform {\hat f \in \ell^2(\hat G)} defined by

\displaystyle  \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).

If {G} is {2}-divisible, then {\hat G} is {2}-torsion-free in the sense that the map {\xi \mapsto 2 \xi} is injective. For any finite set {S \subset \hat G} and any radius {\rho>0}, define the Bohr set

\displaystyle  B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}

where {\|\theta\|_{{\bf R}/{\bf Z}}} denotes the distance of {\theta} to the nearest integer. We refer to the cardinality {|S|} of {S} as the rank of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let {G} be a compact abelian group with Haar probability measure {\mu}. For any Bohr set {B(S,\rho)}, we have

\displaystyle  \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.

Proof: We can cover the torus {({\bf R}/{\bf Z})^S} by {O_{|S|,\rho}(1)} translates {\theta+Q} of the cube {Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}. Then the sets {\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}} form a cover of {G}. But all of these sets lie in a translate of {B(S,\rho)}, and the claim then follows from the translation invariance of {\mu}. \Box

Given any Bohr set {B(S,\rho)}, we define a normalised “Lipschitz” cutoff function {\nu_{B(S,\rho)}: G \rightarrow {\bf R}} by the formula

\displaystyle  \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)

where {c_{B(S,\rho)}} is the constant such that

\displaystyle  \int_G \nu_{B(S,\rho)}\ d\mu = 1,

thus

\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.

The function {\nu_{B(S,\rho)}} should be viewed as an {L^1}-normalised “tent function” cutoff to {B(S,\rho)}. Note from Lemma 2 that

\displaystyle  1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)
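
(Indeed, the lower bound {c_{B(S,\rho)} \geq 1} follows since the integrand in the integral defining {c_{B(S,\rho)}^{-1}} is at most {1}, while the upper bound follows since this integrand is at least {1/2} on {B(S,\rho/2)}, which has measure {\gg_{|S|,\rho} 1} by Lemma 2.)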

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let {G = (G,+)} be a {2}-divisible compact abelian group, with Haar probability measure {\mu}, and let {\epsilon>0}. Then for any measurable function {f: G \rightarrow [0,1]}, there exists a Bohr set {B(S,\rho)} with {|S| \ll_\epsilon 1} and {\rho \gg_\epsilon 1} such that

\displaystyle  \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)

\displaystyle  \geq (\int_G f\ d\mu)^3 - O(\epsilon)

where {*} denotes the convolution operation

\displaystyle  f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with {f := 1_A} and {\epsilon} equal to a small multiple of {\alpha^3} to conclude that there is a Bohr set {B(S,\rho)} with {|S| \ll_\alpha 1} and {\rho \gg_\alpha 1} such that

\displaystyle  \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.

But from (2) we have the pointwise bound {\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}, and Theorem 1 follows.
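
(To verify this pointwise bound, note that {\nu_{B(S,\rho)}} is bounded pointwise by {c_{B(S,\rho)}} and has total mass {1}, so that {\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \leq c_{B(S,\rho)}}, which is {\ll_\alpha 1} by (2) and the bounds {|S| \ll_\alpha 1}, {\rho \gg_\alpha 1}.)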

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set {B(S,\rho)} to capture all the “large Fourier coefficients” of {f}, but such that a certain “dilate” of {B(S,\rho)} does not capture much more Fourier energy of {f} than {B(S,\rho)} itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of {f} into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the {\nu_{B(S,\rho)}} considered above to achieve a similar effect.


This is the tenth thread for the Polymath8b project to obtain new bounds for the quantity

H_m := \liminf_{n \to\infty} p_{n+m} - p_n;

the previous thread may be found here.

Numerical progress on these bounds has slowed in recent months, although we have very recently lowered the unconditional bound on H_1 from 252 to 246 (see the wiki page for more detailed results).  While there may still be scope for further improvement (particularly with respect to bounds for H_m with m=2,3,4,5, which we have not focused on for a while), it looks like we have reached the point of diminishing returns, and it is time to turn to the task of writing up the results.

A draft version of the paper so far may be found here (with the directory of source files here).  Currently, the introduction and the sieve-theoretic portions of the paper are written up, although the sieve-theoretic arguments are surprisingly lengthy, and some simplification (or other reorganisation) may well be possible.  Other portions of the paper that have not yet been written up include the asymptotic analysis of M_k for large k (leading in particular to results for m=2,3,4,5), and a description of the quadratic programming that is used to estimate M_k for small and medium k.  Also we will eventually need an appendix to summarise the material from Polymath8a that we would use to generate various narrow admissible tuples.

One issue here is that our current unconditional bounds on H_m for m=2,3,4,5 rely on a distributional estimate on the primes which we believed to be true in Polymath8a, but never actually worked out (among other things, there were some delicate algebraic geometry issues concerning the vanishing of certain cohomology groups that were never resolved).  This issue does not affect the m=1 calculations, which only use the Bombieri-Vinogradov theorem or else assume the generalised Elliott-Halberstam conjecture.  As such, we will have to rework the computations for these H_m, given that the task of trying to attain the conjectured distributional estimate on the primes would be a significant amount of work that is rather disjoint from the rest of the Polymath8b writeup.  One could simply dust off the old Maple code for this (e.g. one could tweak the code here, with the constraint  1080*varpi/13+ 330*delta/13<1  being replaced by 600*varpi/7+180*delta/7<1), but there is also a chance that our asymptotic bounds for M_k (currently given in messy detail here) could be sharpened.  I plan to look at this issue fairly soon.
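For readers who later wish to redo this computation, the modification amounts to swapping out a single linear constraint on (varpi, delta).  The following sketch (in Python rather than the Maple code linked above, and with purely hypothetical trial values of varpi and delta chosen only for illustration) records the intended change:

```python
from fractions import Fraction

def old_constraint(varpi, delta):
    # constraint in the existing code, tied to the conjectured distribution estimate
    return Fraction(1080, 13) * varpi + Fraction(330, 13) * delta < 1

def new_constraint(varpi, delta):
    # proposed replacement constraint
    return Fraction(600, 7) * varpi + Fraction(180, 7) * delta < 1

# hypothetical trial values, for illustration only
varpi, delta = Fraction(1, 600), Fraction(1, 600)
print(old_constraint(varpi, delta), new_constraint(varpi, delta))
```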

Also, there are a number of smaller observations (e.g. the parity problem barrier that prevents us from ever getting a better bound on H_1 than 6) that should also go into the paper at some point; the current outline of the paper as given in the draft is not necessarily comprehensive.

Let {f: {\bf R}^3 \rightarrow {\bf R}} be an irreducible polynomial in three variables. As {{\bf R}} is not algebraically closed, the zero set {Z_{\bf R}(f) = \{ x \in {\bf R}^3: f(x)=0\}} can split into various components of dimension between {0} and {2}. For instance, if {f(x_1,x_2,x_3) = x_1^2+x_2^2}, the zero set {Z_{\bf R}(f)} is a line; more interestingly, if {f(x_1,x_2,x_3) = x_3^2 + x_2^2 - x_2^3}, then {Z_{\bf R}(f)} is the union of a line and a surface (or equivalently, the product of an acnodal cubic curve with a line). We will assume that the {2}-dimensional component {Z_{{\bf R},2}(f)} is non-empty, thus defining a real surface in {{\bf R}^3}. In particular, this hypothesis implies that {f} is not just irreducible over {{\bf R}}, but is in fact absolutely irreducible (i.e. irreducible over {{\bf C}}), since otherwise one could use the complex factorisation of {f} to contain {Z_{\bf R}(f)} inside the intersection {{\bf Z}_{\bf C}(g) \cap {\bf Z}_{\bf C}(\bar{g})} of the complex zero locus of a complex polynomial {g} and its complex conjugate, with {g,\bar{g}} having no common factor, forcing {Z_{\bf R}(f)} to be at most one-dimensional. (For instance, in the case {f(x_1,x_2,x_3)=x_1^2+x_2^2}, one can take {g(z_1,z_2,z_3) = z_1 + i z_2}.) Among other things, this makes {{\bf Z}_{{\bf R},2}(f)} a Zariski-dense subset of {{\bf Z}_{\bf C}(f)}, thus any polynomial identity which holds true at every point of {{\bf Z}_{{\bf R},2}(f)} also holds true on all of {{\bf Z}_{\bf C}(f)}. This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface {Z_{{\bf R},2}(f)} is said to be ruled if, for a Zariski open dense set of points {x \in Z_{{\bf R},2}(f)}, there exists a line {l_x = \{ x+tv_x: t \in {\bf R} \}} through {x} for some non-zero {v_x \in {\bf R}^3} which is completely contained in {Z_{{\bf R},2}(f)}, thus

\displaystyle f(x+tv_x)=0

for all {t \in {\bf R}}. Also, a point {x \in {\bf Z}_{{\bf R},2}(f)} is said to be a flecnode if there exists a line {l_x = \{ x+tv_x: t \in {\bf R}\}} through {x} for some non-zero {v_x \in {\bf R}^3} which is tangent to {Z_{{\bf R},2}(f)} to third order, in the sense that

\displaystyle f(x+tv_x)=O(t^4)

as {t \rightarrow 0}, or equivalently that

\displaystyle  \frac{d^j}{dt^j} f(x+tv_x)|_{t=0} = 0 \ \ \ \ \ (1)

for {j=0,1,2,3}. Clearly, if {Z_{{\bf R},2}(f)} is a ruled surface, then a Zariski open dense set of points on {Z_{{\bf R},2}(f)} are flecnodes. We then have the remarkable theorem of Cayley and Salmon asserting the converse:

Theorem 1 (Cayley-Salmon theorem) Let {f: {\bf R}^3 \rightarrow {\bf R}} be an irreducible polynomial with {{\bf Z}_{{\bf R},2}} non-empty. Suppose that a Zariski dense set of points in {Z_{{\bf R},2}(f)} are flecnodes. Then {Z_{{\bf R},2}(f)} is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that on a surface of negative curvature, such as the saddle {\{ (x_1,x_2,x_3): x_3 = x_1^2 - x_2^2 \}}, every point of the surface is tangent to second order to a line (a line in a direction along which the second fundamental form vanishes).
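To see what the system (1) looks like concretely, one can expand {f(x+tv_x)} symbolically and read off the coefficients of {t^0,\dots,t^3}.  The following sympy sketch does this for the doubly ruled quadric {f = x_3 - x_1 x_2} (an illustrative example chosen here, not one discussed above) and checks that the ruling direction satisfies all four flecnode equations:

```python
import sympy as sp

x1, x2, x3, t, v1, v2, v3 = sp.symbols('x1 x2 x3 t v1 v2 v3', real=True)

f = x3 - x1 * x2     # a doubly ruled quadric, used here as a toy example

# Expand f(x + t v) in powers of t; the flecnode equations (1) ask for the
# coefficients of t^0, t^1, t^2, t^3 to vanish simultaneously for some v != 0.
g = sp.expand(f.subs({x1: x1 + t * v1, x2: x2 + t * v2, x3: x3 + t * v3},
                     simultaneous=True))
flecnode_eqs = [g.coeff(t, j) for j in range(4)]
print(flecnode_eqs)   # expect (up to ordering): [x3 - x1*x2, v3 - x1*v2 - x2*v1, -v1*v2, 0]

# On the surface (x3 = x1*x2), the ruling direction v = (1, 0, x2) solves all of them:
ruling = {v1: 1, v2: 0, v3: x2, x3: x1 * x2}
print([sp.simplify(eq.subs(ruling)) for eq in flecnode_eqs])   # [0, 0, 0, 0]
```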

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines {l_x} indicated by the flecnode to produce smooth curves {\gamma} on the surface {{\bf Z}_{{\bf R},2}}; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.


A core foundation of the subject now known as arithmetic combinatorics (and particularly of the subfield of additive combinatorics) is the family of elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

\displaystyle  A+B := \{ a+b: a \in A, b \in B \}

and difference sets

\displaystyle  A-B := \{ a-b: a \in A, b \in B \},

as well as iterated sumsets such as {3A=A+A+A}, {2A-2A=A+A-A-A}, and so forth. Here, {A, B} are finite non-empty subsets of some additive group {G = (G,+)} (classically one took {G={\bf Z}} or {G={\bf R}}, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let {A, B} be finite non-empty subsets of {G}. Then {A} may be covered by at most {\frac{|A+B|}{|B|}} translates of {B-B}.

Proof: Consider a maximal set of disjoint translates {a+B} of {B} by elements {a \in A}. These translates have cardinality {|B|}, are disjoint, and lie in {A+B}, so there are at most {\frac{|A+B|}{|B|}} of them. By maximality, for any {a' \in A}, {a'+B} must intersect at least one of the selected {a+B}, thus {a' \in a+B-B}, and the claim follows. \Box

Lemma 2 (Ruzsa triangle inequality) Let {A,B,C} be finite non-empty subsets of {G}. Then {|A-C| \leq \frac{|A-B| |B-C|}{|B|}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} of this map of cardinality at least {|B|}, thanks to the obvious identity {a-c = (a-b) + (b-c)} for each {b \in B}. Since {(A-B) \times (B-C)} has cardinality {|A-B| |B-C|}, the claim follows. \Box
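Both proofs are completely constructive, and it is straightforward to sanity-check the resulting bounds on small examples.  The following Python sketch (with arbitrarily chosen random test sets in {G = {\bf Z}}) runs the greedy selection from the proof of Lemma 1 and verifies the conclusions of both lemmas:

```python
import random

def sumset(A, B):
    return {a + b for a in A for b in B}

def difference_set(A, B):
    return {a - b for a in A for b in B}

random.seed(0)
A = set(random.sample(range(100), 20))
B = set(random.sample(range(100), 15))
C = set(random.sample(range(100), 15))

# Lemma 1: greedily pick a maximal disjoint family of translates a + B, then check
# that the selected elements a, fattened by B - B, cover all of A.
selected, covered = [], set()
for a in sorted(A):
    translate = {a + b for b in B}
    if covered.isdisjoint(translate):
        selected.append(a)
        covered |= translate
assert len(selected) * len(B) <= len(sumset(A, B))   # disjoint translates inside A + B
BmB = difference_set(B, B)
assert all(any(x - a in BmB for a in selected) for x in A)   # the translates of B - B cover A

# Lemma 2: |A - C| * |B| <= |A - B| * |B - C|
assert len(difference_set(A, C)) * len(B) <= len(difference_set(A, B)) * len(difference_set(B, C))
print("Ruzsa covering lemma and triangle inequality verified on this example.")
```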

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets {A} of small doubling, in the sense that {|A+A| \leq K|A|} for some bounded {K}. (There are deeper theorems, most notably Freiman’s theorem, which give more control than elementary Ruzsa calculus does; however, the known bounds in Freiman’s theorem are worse than polynomial in {K} (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in {K}.)

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. {{\bf R}^n} or a torus {({\bf R}/{\bf Z})^n}) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets {A} remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group {O_n({\bf R})} of orthogonal {n \times n} matrices, and let {A} be the matrices obtained by starting with all of the orthogonal matrices in {O_n({\bf R})} and rounding each coefficient of each matrix in this set to the nearest multiple of {\epsilon}, for some small {\epsilon>0}. This forms a finite set (whose cardinality grows as {\epsilon\rightarrow 0} like a certain negative power of {\epsilon}). In the limit {\epsilon \rightarrow 0}, the set {A} is not a set of small doubling in the discrete sense. However, {A \cdot A} is still close to {A} in a metric sense, being contained in the {O_n(\epsilon)}-neighbourhood of {A}. Another key example comes from graphs {\Gamma := \{ (x, f(x)): x \in A \}} of maps {f: A \rightarrow H} from a subset {A} of one additive group {G = (G,+)} to another {H = (H,+)}. If {f} is “approximately additive” in the sense that {f(x+y)} is close to {f(x)+f(y)} in some metric whenever {x,y,x+y} all lie in {A}, then {\Gamma} might not have small doubling in the discrete sense (because {f(x+y)-f(x)-f(y)} could take a large number of values), but could be considered a set of small doubling in a discretised sense.
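To make the first example concrete, one can test numerically how far the product of two elements of the discretised set {A} is from {A}.  The following sketch (with arbitrary choices of {n} and {\epsilon}, and using random rather than exhaustively enumerated orthogonal matrices) illustrates that such products lie within {O_n(\epsilon)} of an exactly orthogonal matrix, and hence of its rounding:

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR factorisation of a Gaussian matrix yields an orthogonal matrix
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def round_to_grid(M, eps):
    # round every entry to the nearest multiple of eps
    return np.round(M / eps) * eps

rng = np.random.default_rng(0)
n, eps = 3, 1e-3

A1 = round_to_grid(random_orthogonal(n, rng), eps)
A2 = round_to_grid(random_orthogonal(n, rng), eps)

# The product of two rounded orthogonal matrices need not lie on the eps-grid, but it
# stays within O(eps) of the nearest exactly orthogonal matrix (polar projection via SVD),
# and hence within O(eps) of that matrix's rounding, which belongs to the discretised set.
P = A1 @ A2
U, _, Vt = np.linalg.svd(P)
Q = U @ Vt
print(np.abs(P - Q).max())                       # O(eps)
print(np.abs(P - round_to_grid(Q, eps)).max())   # O(eps)
```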

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric {d} on the underlying group {G = (G,+)} (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let {(X,d)} be a metric space, let {E} be a subset of {X}, and let {r>0} be a radius.

  • The packing number {N^{pack}_r(E)} is the largest number of points {x_1,\dots,x_n} one can pack inside {E} such that the balls {B(x_1,r),\dots,B(x_n,r)} are disjoint.
  • The internal covering number {N^{int}_r(E)} is the fewest number of points {x_1,\dots,x_n \in E} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The external covering number {N^{ext}_r(E)} is the fewest number of points {x_1,\dots,x_n \in X} such that the balls {B(x_1,r),\dots,B(x_n,r)} cover {E}.
  • The metric entropy {N^{ent}_r(E)} is the largest number of points {x_1,\dots,x_n} one can find in {E} that are {r}-separated, thus {d(x_i,x_j) \geq r} for all {i \neq j}.

It is an easy exercise to verify the inequalities

\displaystyle  N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)

for any {r>0}, and that {N^*_r(E)} is non-increasing in {r} and non-decreasing in {E} for the three choices {* = pack,ext,ent} (but monotonicity in {E} can fail for {*=int}!). It turns out that the external covering number {N^{ext}_r(E)} is slightly more convenient to work with than the other notions of metric entropy, so we will abbreviate {N_r(E) = N^{ext}_r(E)}. The cardinality {|E|} can be viewed as the limit of the entropies {N^*_r(E)} as {r \rightarrow 0}.
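All four quantities can be explored by simple greedy procedures on finite examples.  In particular, the standard greedy construction of a maximal {r}-separated subset simultaneously gives a lower bound for {N^{ent}_r(E)} and an upper bound for {N^{int}_r(E)}, which is one way to see that these notions are all comparable.  Here is a small Python sketch of this construction on a random finite subset of the plane (the set, the radius, and the Euclidean metric are arbitrary choices made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.random((200, 2))   # a random finite subset of the plane
r = 0.1

def greedy_separated(E, r):
    """Greedily extract a maximal r-separated subset of E."""
    chosen = []
    for x in E:
        if all(np.linalg.norm(x - y) >= r for y in chosen):
            chosen.append(x)
    return np.array(chosen)

S = greedy_separated(E, r)

# S is r-separated, so |S| <= N^ent_r(E); and by maximality every point of E lies
# within distance r of some point of S, so the balls B(s, r) cover E and |S| >= N^int_r(E).
dists = np.linalg.norm(E[:, None, :] - S[None, :, :], axis=-1)
assert dists.min(axis=1).max() < r
print("greedy maximal r-separated set has size", len(S))
```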

If we have the bounded doubling property that {B(0,2r)} is covered by {O(1)} translates of {B(0,r)} for each {r>0}, and one has a Haar measure {m} on {G} which assigns a positive finite mass to each ball, then any of the above entropies {N^*_r(E)} is comparable to {m( E + B(0,r) ) / m(B(0,r))}, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter {r} by some absolute constant. Such losses can be acceptable in applications in which the underlying sets {A} are largely “transverse” to the balls {B(0,r)}, so that the {N_r}-entropy of {A} is largely insensitive to the choice of {r}; this is a situation which arises in particular in the case of graphs {\Gamma = \{ (x,f(x)): x \in A \}} discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)

Henceforth we work in an additive group {G} equipped with a translation-invariant metric {d}. (One can also generalise things slightly by allowing the metric to attain the values {0} or {+\infty}, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set {E} will have finite entropy {N_r(E)} for any {r>0}. We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let {A, B} be precompact non-empty subsets of {G}, and let {r>0}. Then {A} may be covered by at most {\frac{N_r(A+B)}{N_r(B)}} translates of {B-B+B(0,2r)}.

Proof: Let {a_1,\dots,a_n \in A} be a maximal set of points such that the sets {a_i + B + B(0,r)} are all disjoint. Then the sets {a_i+B} are disjoint in {A+B} and have entropy {N_r(a_i+B)=N_r(B)}, and furthermore any ball of radius {r} can intersect at most one of the {a_i+B}. We conclude that {N_r(A+B) \geq n N_r(B)}, so {n \leq \frac{N_r(A+B)}{N_r(B)}}. If {a \in A}, then {a+B+B(0,r)} must intersect one of the {a_i + B + B(0,r)}, so {a \in a_i + B-B + B(0,2r)}, and the claim follows. \Box

Lemma 5 (Ruzsa triangle inequality) Let {A,B,C} be precompact non-empty subsets of {G}, and let {r>0}. Then {N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}.

Proof: Consider the addition map {+: (x,y) \mapsto x+y} from {(A-B) \times (B-C)} to {G}. The domain {(A-B) \times (B-C)} may be covered by {N_r(A-B) N_r(B-C)} product balls {B(x,r) \times B(y,r)}. Every element {a-c} of {A - C} has a preimage {\{ (x,y) \in (A-B) \times (B-C): x+y = a-c \}} of this map which projects to a translate of {B}, and thus must meet at least {N_r(B)} of these product balls. However, if two elements of {A-C} are separated by a distance of at least {4r}, then no product ball can intersect both preimages. We thus see that {N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}; since {N_{4r}(A-C) \leq N^{ent}_{4r}(A-C)}, the claim follows. \Box

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete results are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.


As in the previous post, all computations here are at the formal level only.

In the previous blog post, the Euler equations for inviscid incompressible fluid flow were interpreted in a Lagrangian fashion, and then Noether’s theorem invoked to derive the known conservation laws for these equations. In a bit more detail: starting with Lagrangian space {{\cal L} = ({\bf R}^n, \hbox{vol})} and Eulerian space {{\cal E} = ({\bf R}^n, \eta, \hbox{vol})}, we let {M} be the space of volume-preserving, orientation-preserving maps {\Phi: {\cal L} \rightarrow {\cal E}} from Lagrangian space to Eulerian space. Given a curve {\Phi: {\bf R} \rightarrow M}, we can define the Lagrangian velocity field {\dot \Phi: {\bf R} \times {\cal L} \rightarrow T{\cal E}} as the time derivative of {\Phi}, and the Eulerian velocity field {u := \dot \Phi \circ \Phi^{-1}: {\bf R} \times {\cal E} \rightarrow T{\cal E}}. The volume-preserving nature of {\Phi} ensures that {u} is a divergence-free vector field:

\displaystyle  \nabla \cdot u = 0. \ \ \ \ \ (1)

If we formally define the functional

\displaystyle  J[\Phi] := \frac{1}{2} \int_{\bf R} \int_{{\cal E}} |u(t,x)|^2\ dx dt = \frac{1}{2} \int_{\bf R} \int_{{\cal L}} |\dot \Phi(t,x)|^2\ dx dt

then one can show that the critical points of this functional (with appropriate boundary conditions) obey the Euler equations

\displaystyle  [\partial_t + u \cdot \nabla] u = - \nabla p

\displaystyle  \nabla \cdot u = 0

for some pressure field {p: {\bf R} \times {\cal E} \rightarrow {\bf R}}. As discussed in the previous post, the time translation symmetry of this functional yields conservation of the Hamiltonian

\displaystyle  \frac{1}{2} \int_{{\cal E}} |u(t,x)|^2\ dx = \frac{1}{2} \int_{{\cal L}} |\dot \Phi(t,x)|^2\ dx;

the rigid motion symmetries of Eulerian space give conservation of the total momentum

\displaystyle  \int_{{\cal E}} u(t,x)\ dx

and total angular momentum

\displaystyle  \int_{{\cal E}} x \wedge u(t,x)\ dx;

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

\displaystyle  \int_{\Phi(\gamma)} u^*

for any closed loop {\gamma} in {{\cal L}}, or equivalently pointwise conservation of the Lagrangian vorticity {\Phi^* \omega = \Phi^* du^*}, where {u^*} is the {1}-form associated with the vector field {u} using the Euclidean metric {\eta} on {{\cal E}}, with {\Phi^*} denoting pullback by {\Phi}.

It turns out that one can generalise the above calculations. Given any self-adjoint operator {A} on divergence-free vector fields {u: {\cal E} \rightarrow {\bf R}^n}, we can define the functional

\displaystyle  J_A[\Phi] := \frac{1}{2} \int_{\bf R} \int_{{\cal E}} u(t,x) \cdot A u(t,x)\ dx dt;

as we shall see below the fold, critical points of this functional (with appropriate boundary conditions) obey the generalised Euler equations

\displaystyle  [\partial_t + u \cdot \nabla] Au + (\nabla u) \cdot Au= - \nabla \tilde p \ \ \ \ \ (2)

\displaystyle  \nabla \cdot u = 0

for some pressure field {\tilde p: {\bf R} \times {\cal E} \rightarrow {\bf R}}, where {(\nabla u) \cdot Au} in coordinates is {\partial_i u_j Au_j} with the usual summation conventions. (When {A=1}, {(\nabla u) \cdot Au = \nabla(\frac{1}{2} |u|^2)}, and this term can be absorbed into the pressure {\tilde p}, and we recover the usual Euler equations.) Time translation symmetry then gives conservation of the Hamiltonian

\displaystyle  \frac{1}{2} \int_{{\cal E}} u(t,x) \cdot A u(t,x)\ dx.

If the operator {A} commutes with rigid motions on {{\cal E}}, then we have conservation of total momentum

\displaystyle  \int_{{\cal E}} Au(t,x)\ dx

and total angular momentum

\displaystyle  \int_{{\cal E}} x \wedge Au(t,x)\ dx,

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

\displaystyle  \int_{\Phi(\gamma)} (Au)^*

or pointwise conservation of the Lagrangian vorticity {\Phi^* \theta := \Phi^* d(Au)^*}. These applications of Noether’s theorem proceed exactly as the previous post; we leave the details to the interested reader.

One particular special case of interest arises in two dimensions {n=2}, when {A} is the inverse derivative {A = |\nabla|^{-1} = (-\Delta)^{-1/2}}. The vorticity {\theta = d(Au)^*} is a {2}-form, which in the two-dimensional setting may be identified with a scalar. In coordinates, if we write {u = (u_1,u_2)}, then

\displaystyle  \theta = \partial_{x_1} |\nabla|^{-1} u_2 - \partial_{x_2} |\nabla|^{-1} u_1.

Since {u} is also divergence-free, we may therefore write

\displaystyle  u = (- \partial_{x_2} \psi, \partial_{x_1} \psi )

where the stream function {\psi} is given by the formula

\displaystyle  \psi = |\nabla|^{-1} \theta.

If we take the curl of the generalised Euler equation (2), we obtain (after some computation) the surface quasi-geostrophic equation

\displaystyle  [\partial_t + u \cdot \nabla] \theta = 0 \ \ \ \ \ (3)

\displaystyle  u = (-\partial_{x_2} |\nabla|^{-1} \theta, \partial_{x_1} |\nabla|^{-1} \theta).

This equation has strong analogies with the three-dimensional incompressible Euler equations, and can be viewed as a simplified model for that system; see this paper of Constantin, Majda, and Tabak for details.

Now we can specialise the general conservation laws derived previously to this setting. The conserved Hamiltonian is

\displaystyle  \frac{1}{2} \int_{{\bf R}^2} u\cdot |\nabla|^{-1} u\ dx = \frac{1}{2} \int_{{\bf R}^2} \theta \psi\ dx = \frac{1}{2} \int_{{\bf R}^2} \theta |\nabla|^{-1} \theta\ dx

(a law previously observed for this equation in the abovementioned paper of Constantin, Majda, and Tabak). As {A} commutes with rigid motions, we also have (formally, at least) conservation of momentum

\displaystyle  \int_{{\bf R}^2} Au\ dx

(which up to trivial transformations is also expressible in impulse form as {\int_{{\bf R}^2} \theta x\ dx}, after integration by parts), and conservation of angular momentum

\displaystyle  \int_{{\bf R}^2} x \wedge Au\ dx

(which up to trivial transformations is {\int_{{\bf R}^2} \theta |x|^2\ dx}). Finally, diffeomorphism invariance gives pointwise conservation of Lagrangian vorticity {\Phi^* \theta}, thus {\theta} is transported by the flow (which is also evident from (3)). In particular, all integrals of the form {\int_{{\bf R}^2} F(\theta)\ dx} for a fixed function {F} are conserved by the flow.
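Formally, all of these quantities are easy to evaluate numerically once one implements the Fourier multiplier {|\nabla|^{-1}}.  The following numpy sketch (a pseudo-spectral discretisation on a periodic torus rather than on {{\bf R}^2}, with an arbitrary mean-zero test vorticity; the grid size and test data are illustrative assumptions) computes the stream function {\psi}, the velocity field {u}, the conserved Hamiltonian {\frac{1}{2}\int \theta \psi}, and one of the conserved integrals {\int F(\theta)} with {F(s)=s^2}, and checks that {u} is divergence-free:

```python
import numpy as np

# Pseudo-spectral evaluation of psi = |nabla|^{-1} theta and u = (-d_{x2} psi, d_{x1} psi)
# on the torus [0, 2*pi)^2, used here as a periodic stand-in for R^2.
N = 128
grid = np.arange(N) * 2 * np.pi / N
X1, X2 = np.meshgrid(grid, grid, indexing='ij')
k = np.fft.fftfreq(N, d=1.0 / N)           # integer Fourier frequencies
K1, K2 = np.meshgrid(k, k, indexing='ij')
absK = np.sqrt(K1**2 + K2**2)
inv_absK = np.zeros_like(absK)
inv_absK[absK > 0] = 1.0 / absK[absK > 0]  # the multiplier |nabla|^{-1}, dropping the zero mode

theta = np.sin(3 * X1) * np.cos(2 * X2) + 0.5 * np.cos(X1 + 4 * X2)   # mean-zero test vorticity
theta_hat = np.fft.fft2(theta)

psi_hat = inv_absK * theta_hat                     # stream function psi = |nabla|^{-1} theta
psi = np.real(np.fft.ifft2(psi_hat))
u1 = np.real(np.fft.ifft2(-1j * K2 * psi_hat))     # u_1 = -d_{x2} psi
u2 = np.real(np.fft.ifft2(1j * K1 * psi_hat))      # u_2 =  d_{x1} psi

dA = (2 * np.pi / N) ** 2                          # area of one grid cell
hamiltonian = 0.5 * np.sum(theta * psi) * dA       # (1/2) int theta |nabla|^{-1} theta
casimir = np.sum(theta**2) * dA                    # int F(theta) dx with F(s) = s^2

# sanity check: u is divergence-free
div = np.real(np.fft.ifft2(1j * K1 * np.fft.fft2(u1) + 1j * K2 * np.fft.fft2(u2)))
print(hamiltonian, casimir, np.abs(div).max())
```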

