I’ve just finished writing the first draft of my third book coming out of the 2010 blog posts, namely “Higher order Fourier analysis”, which was based primarily on my graduate course on the topic, though it also contains material from several additional blog posts related to linear and higher order Fourier analysis. It is available online here. As usual, comments and corrections are welcome. There is also a stub page for the book, which at present does not contain much more than the above link.


Last year, Emmanuel Breuillard, Ben Green, Bob Guralnick, and I wrote a paper entitled “Strongly dense free subgroups of semisimple Lie groups“. The main theorem in that paper asserted that given any semisimple algebraic group {G(k)} over an uncountable algebraically closed field {k}, there existed a free subgroup {\Gamma} of {G(k)} which was strongly dense in the sense that any non-abelian subgroup of {\Gamma} was Zariski dense in {G(k)}. This type of result is useful for establishing expansion in finite simple groups of Lie type, as we will discuss in a subsequent paper.

An essentially equivalent formulation of the main result is that if {w_1, w_2 \in F_2} are two non-commuting elements of the free group {F_2} on two generators, and {(a, b)} is a generic pair of elements in {G(k) \times G(k)}, then {w_1(a,b)} and {w_2(a,b)} are not contained in any proper closed algebraic subgroup {H} of {G(k)}. Here, “generic” means “outside of at most countably many proper subvarieties”. In most cases, one expects that if {(a, b)} is generically drawn from {G(k) \times G(k)}, then {(w_1(a,b), w_2(a,b))} will also be generically drawn from {G(k) \times G(k)}, but this is not always the case, which is a key source of difficulty in the paper. For instance, if {w_2} is conjugate to {w_1} in {F_2}, then {w_1(a,b)} and {w_2(a,b)} must be conjugate in {G(k)} and so the pair {(w_1(a,b), w_2(a,b))} lies in a proper subvariety of {G(k) \times G(k)}. It is currently an open question to determine all the pairs {w_1, w_2} of words for which {(w_1(a,b), w_2(a,b))} is not generic for generic {a,b} (or equivalently, for which the double word map {(a,b) \mapsto (w_1(a,b),w_2(a,b))} is not dominant).

The main strategy of proof was as follows. It is not difficult to reduce to the case when {G} is simple. Suppose for contradiction that we could find two non-commuting words {w_1, w_2} such that {w_1(a,b), w_2(a,b)} were generically trapped in a proper closed algebraic subgroup. As it turns out, there are only finitely many conjugacy classes of such subgroups, and so one can assume that {w_1(a,b), w_2(a,b)} were generically trapped in a conjugate {H^g} of a fixed proper closed algebraic subgroup {H}. One can show that {w_1(a,b)}, {w_2(a,b)}, and {[w_1(a,b),w_2(a,b)]} are generically regular semisimple, which implies that {H} is a maximal rank semisimple subgroup. The key step was then to find another proper semisimple subgroup {H'} of {G} which was not a degeneration of {H}, by which we mean that there did not exist a pair {(x,y)} in the Zariski closure {\overline{\bigcup_{g \in G} H^g \times H^g}} of the products of conjugates of {H}, such that {x, y} generated a Zariski-dense subgroup of {H'}. This is enough to establish the theorem, because we could use an induction hypothesis to find {a,b} in {H'} (and hence in {G(k)}) such that {w_1(a,b), w_2(a,b)} generated a Zariski-dense subgroup of {H'}, which contradicts the hypothesis that {(w_1(a,b),w_2(a,b))} was trapped in {\bigcup_{g \in G} H^g \times H^g} for generic {(a,b)} (and hence in {\overline{\bigcup_{g \in G} H^g \times H^g}} for all {(a,b)}).

To illustrate the concept of a degeneration, take {G(k) = SO(5)} and let {H = SO(3) \times SO(2)} be the stabiliser of a non-degenerate {2}-space in {k^5}. All other stabilisers of non-degenerate {2}-spaces are conjugate to {H}. However, stabilisers of degenerate {2}-spaces are not conjugate to {H}, but are still degenerations of {H}. For instance, the stabiliser of a totally singular {2}-space (which is isomorphic to the affine group on {k^2}, extended by {k}) is a degeneration of {H}.

A significant portion of the paper was then devoted to verifying that for each simple algebraic group {G}, and each maximal rank proper semisimple subgroup {H} of {G}, one could find another proper semisimple subgroup {H'} which was not a degeneration of {H}; roughly speaking, this means that {H'} is so “different” from {H} that no conjugate of {H} can come close to covering {H'}. This required using the standard classification of algebraic groups via Dynkin diagrams, and knowledge of the various semisimple subgroups of these groups and their representations (as we used the latter as obstructions to degeneration, for instance one can show that a reducible representation cannot degenerate to an irreducible one).

During the refereeing process for this paper, we discovered that there was precisely one family of simple algebraic groups for which this strategy did not actually work, namely the group {G = Sp(4) = Spin(5)} (or the group {SO(5)} that is double-covered by this group) in characteristic {3}. This group (which has Dynkin diagram {B_2=C_2}, as discussed in this previous post) has one maximal rank proper semisimple subgroup up to conjugacy, namely {SO(4)}, which is the stabiliser of a line in {k^5}. To find a proper semisimple group {H'} that is not a degeneration of this group, we basically need to find a subgroup {H'} that does not stabilise any line in {k^5}. In characteristic larger than three (or characteristic zero), one can proceed by using the action of {SL_2(k)} on the five-dimensional space {\hbox{Sym}^4(k^2)} of homogeneous degree four polynomials on {k^2}, which preserves a non-degenerate symmetric form (the four-fold tensor power of the area form on {k^2}) and thus embeds into {SO(5)}; as no non-zero polynomial is fixed by all of {SL_2(k)}, we see that this copy of {SL_2(k)} is not a degeneration of {H}.
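
To see this concretely, here is a small sympy sketch (in characteristic zero) verifying that the substitution action of {SL_2} on binary quartics preserves a non-degenerate symmetric form. The antidiagonal matrix {B} below is my normalisation of this form in the monomial basis, so treat it as an assumption that the code itself checks:

```python
import sympy as sp

x, y = sp.symbols('x y')
mons = [x**(4 - i) * y**i for i in range(5)]   # basis of Sym^4(k^2)

def rep_matrix(a, b, c, d):
    """Matrix of the substitution (x,y) -> (a x + b y, c x + d y) on binary quartics."""
    M = sp.zeros(5, 5)
    for j, m in enumerate(mons):
        img = sp.expand(m.subs({x: a * x + b * y, y: c * x + d * y},
                               simultaneous=True))
        for i in range(5):
            M[i, j] = sp.Poly(img, x, y).coeff_monomial(mons[i])
    return M

# Candidate invariant symmetric form (my normalisation): antidiagonal
# entries (1, -1/4, 1/6, -1/4, 1).
w = [1, sp.Rational(-1, 4), sp.Rational(1, 6), sp.Rational(-1, 4), 1]
B = sp.Matrix(5, 5, lambda i, j: w[i] if i + j == 4 else 0)

for a, b, c, d in [(1, 1, 0, 1), (1, 0, 1, 1),
                   (2, 0, 0, sp.Rational(1, 2)), (3, 1, 5, 2)]:  # det = 1
    M = rep_matrix(a, b, c, d)
    assert M.T * B * M == B   # the form is preserved
print("Sym^4 of SL_2 preserves a non-degenerate symmetric form")
```

Clearing denominators turns {B} into the antidiagonal form {(12,-3,2,-3,12)}, which visibly degenerates modulo two and modulo three, consistent with the degeneration in small characteristic discussed in the next paragraph.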

Unfortunately, in characteristics two and three, the symmetric form on {\hbox{Sym}^4(k^2)} degenerates, and this embedding is lost. In the characteristic two case, one can proceed using the characteristic {2} fact that {SO(5)} is isomorphic to {Sp(4)} (because in characteristic two, the space of null vectors is a hyperplane, and the symmetric form becomes symplectic on this hyperplane), which thus has an additional maximal rank proper semisimple subgroup {Sp(2) \times Sp(2)} that is not conjugate to the {SO(4)} subgroup. But in characteristic three, it turns out that there are no further semisimple subgroups of {SO(5)} that are not already contained in a conjugate of the {SO(4)} subgroup. (This is not a difficulty for larger groups such as {SO(6)} or {SO(7)}, where there are plenty of other semisimple groups to utilise; it is only this smallish group {SO(5)} that has the misfortune of having exactly one maximal rank proper semisimple subgroup to play with, and not enough other semisimple subgroups lying around in characteristic three.)

As a consequence of this issue, our argument does not actually work in the case when the characteristic is three and the semisimple group {G} contains a copy of {SO(5)} (or {Sp(4)}), and we have had to modify our paper to delete this case from our results. We believe that such groups still do contain strongly dense free subgroups, but this appears to be just out of reach of our current method.

One thing that this experience has taught me is that algebraic groups behave somewhat pathologically in low characteristic; in particular, intuition coming from the characteristic zero case can become unreliable in characteristic two or three.

Classically, the fundamental object of study in algebraic geometry is the solution set

\displaystyle  V = V_{P_1,\ldots,P_m} := \{ (x_1,\ldots,x_d) \in k^d: P_1(x_1,\ldots,x_d) = \ldots = P_m(x_1,\ldots,x_d) = 0 \} \ \ \ \ \ (1)

to multiple algebraic equations

\displaystyle  P_1(x_1,\ldots,x_d) = \ldots = P_m(x_1,\ldots,x_d) = 0

in multiple unknowns {x_1,\ldots,x_d} in a field {k}, where the {P_1,\ldots,P_m: k^d \rightarrow k} are polynomials of various degrees {D_1, \ldots, D_m}. We adopt the classical perspective of viewing {V} as a set (and specifically, as an algebraic set), rather than as a scheme. Without loss of generality we may order the degrees in non-increasing order:

\displaystyle  D_1 \geq D_2 \geq \ldots \geq D_m \geq 1.

We can distinguish between the underdetermined case {m < d}, when there are more unknowns than equations; the determined case {m=d} when there are exactly as many unknowns as equations; and the overdetermined case {m>d}, when there are more equations than unknowns.

Experience has shown that the theory of such equations is significantly simpler if one assumes that the underlying field {k} is algebraically closed, and so we shall make this assumption throughout the rest of this post. In particular, this covers the important case when {k={\bf C}} is the field of complex numbers (but it does not cover the case {k={\bf R}} of real numbers – see below).

From the general “soft” theory of algebraic geometry, we know that the algebraic set {V} is a union of finitely many algebraic varieties, each of dimension at least {d-m}, with none of these components contained in any other. In particular, in the underdetermined case {m<d}, there are no zero-dimensional components of {V}, and thus {V} is either empty or infinite.

Now we turn to the determined case {d=m}, where we expect the solution set {V} to be zero-dimensional and thus finite. Here, the basic control on the solution set is given by Bezout’s theorem. In our notation, this theorem states the following:

Theorem 1 (Bezout’s theorem) Let {d=m=2}. If {V} is finite, then it has cardinality at most {D_1 D_2}.

This result can be found in any introductory algebraic geometry textbook; it can for instance be proven using the classical tool of resultants. The solution set {V} will be finite when the two polynomials {P_1,P_2} are coprime, but can (and will) be infinite if {P_1,P_2} share a non-trivial common factor.
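
To illustrate the resultant approach, here is a quick sympy sketch (the circle-and-parabola example is my own choice): eliminating one variable produces a univariate polynomial of degree at most {D_1 D_2} whose roots carry the common zeros.

```python
from sympy import symbols, resultant, degree, solve

x, y = symbols('x y')
P1 = x**2 + y**2 - 1   # circle, degree D1 = 2
P2 = y - x**2          # parabola, degree D2 = 2

# Eliminating y: the roots of the resultant are the x-coordinates of the
# common zeros, and its degree is at most D1 * D2 = 4.
R = resultant(P1, P2, y)
print(R, degree(R, x))               # x**4 + x**2 - 1, degree 4
print(len(solve([P1, P2], [x, y])))  # 4 common points over C
```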

By defining the right notion of multiplicity on {V} (and adopting a suitably “scheme-theoretic” viewpoint), and working in projective space rather than affine space, one can make the inequality {|V| \leq D_1 D_2} an equality. However, for many applications (and in particular, for the applications to combinatorial incidence geometry), the upper bound usually suffices.

Bezout’s theorem can be generalised in a number of ways. For instance, the restriction on the finiteness of the solution set {V} can be dropped by restricting attention to {V^0}, the union of the zero-dimensional irreducible components of {V}:

Corollary 2 (Bezout’s theorem, again) Let {d=m=2}. Then {V^0} has cardinality at most {D_1 D_2}.

Proof: We factor {P_1, P_2} into irreducible factors (using unique factorisation of polynomials). By removing repeated factors, we may assume {P_1, P_2} are square-free. We then write {P_1 = Q_1 R}, {P_2 = Q_2 R} where {R} is the greatest common divisor of {P_1,P_2} and {Q_1,Q_2} are coprime. Observe that the zero-dimensional components of {\{ P_1=P_2 = 0\}} are contained in {\{ Q_1 = Q_2 = 0\}}, which is finite from the coprimality of {Q_1,Q_2}. The claim follows. \Box
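
The factorisation used in this proof is easy to carry out in practice; here is a toy sympy sketch (the specific polynomials are mine) extracting the common factor {R} and the finite set {\{Q_1 = Q_2 = 0\}}:

```python
import sympy as sp

x, y = sp.symbols('x y')
P1 = sp.expand((x - y) * (x + y))      # degree 2
P2 = sp.expand((x - y) * (x * y - 1))  # degree 3

R = sp.gcd(P1, P2)                     # common factor x - y (up to constants)
Q1, Q2 = sp.cancel(P1 / R), sp.cancel(P2 / R)
# The zero-dimensional components of {P1 = P2 = 0} lie in {Q1 = Q2 = 0},
# which is finite since Q1 and Q2 are coprime.
print(R, sp.solve([Q1, Q2], [x, y]))   # two points over C
```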

It is also not difficult to use Bezout’s theorem to handle the overdetermined case {m > d=2} in the plane:

Corollary 3 (Bezout’s theorem, yet again) Let {m \geq d=2}. Then {V^0} has cardinality at most {D_1 D_2}.

Proof: We may assume all the {P_i} are square-free. We write {P_1 = Q_2 R_2}, where {Q_2} is coprime to {P_2} and {R_2} divides {P_2} (and also {P_1}). We then write {R_2 = Q_3 R_3}, where {Q_3} is coprime to {P_3} and {R_3} divides {P_3} (and also {P_1,P_2}). Continuing in this fashion we obtain a factorisation {P_1 = Q_2 Q_3 \ldots Q_m R_m}. One then observes that {V^0} is contained in the set {\bigcup_{i=2}^m \{ Q_i = P_i = 0 \}}, which by Theorem 1 has cardinality at most {\sum_{i=2}^m \hbox{deg}(Q_i) D_i}. Since {D_i \leq D_2} and {\sum_{i=2}^m \hbox{deg}(Q_i) \leq \hbox{deg}(P_1) = D_1}, the claim follows. \Box

Remark 1 Of course, in the overdetermined case {m>d} one generically expects the solution set to be empty, but if there is enough degeneracy or numerical coincidence then non-empty solution sets can occur. In particular, by considering the case when {P_2=\ldots=P_m} and {D_2=\ldots=D_m} we see that the bound {D_1 D_2} can be sharp in some cases. However, one can do a little better in this situation; by decomposing {P_m} into irreducible components, for instance, one can improve the upper bound of {D_1 D_2} slightly to {D_1 D_m}. This improvement seems harder to obtain in higher dimensions, though (see below).

Bezout’s theorem also extends to higher dimensions. Indeed, we have

Theorem 4 (Higher-dimensional Bezout’s theorem) Let {d=m \geq 0}. If {V} is finite, then it has cardinality at most {D_1 \ldots D_d}.

This is a standard fact, and can for instance be proved from the more general and powerful machinery of intersection theory. A typical application of this theorem is to show that, given a degree {D} polynomial {P: {\bf R}^d \rightarrow {\bf R}} over the reals, the number of connected components of {\{ x \in {\bf R}^d: P(x) \neq 0 \}} is {O(D^d)}. The main idea of the proof is to locate a critical point {\nabla P(x) = 0} inside each connected component, and use Bezout’s theorem to count the number of zeroes of the polynomial map {\nabla P: {\bf R}^d \rightarrow {\bf R}^d}. (This doesn’t quite work directly because some components may be unbounded, and because the fibre of {\nabla P} at the origin may contain positive-dimensional components, but one can use truncation and generic perturbation to deal with these issues; see my recent paper with Solymosi for further discussion.)

Bezout’s theorem can be extended to the overdetermined case as before:

Theorem 5 (Bezout’s inequality) Let {m \geq d \geq 0}. Then {V^0} has cardinality at most {D_1 \ldots D_d}.

Remark 2 Theorem 5 ostensibly only controls the zero-dimensional components of {V}, but by throwing in a few generic affine-linear forms to the set of polynomials {P_1,\ldots,P_m} (thus intersecting {V} with a bunch of generic hyperplanes) we can also control the total degree of all the {i}-dimensional components of {V} for any fixed {i}. (Again, by using intersection theory one can get a slightly more precise bound than this, but the proof of that bound is more complicated than the arguments given here.)

This time, though, it is a slightly non-trivial matter to deduce Theorem 5 from Theorem 4, due to the standard difficulty that the intersection of irreducible varieties need not be irreducible (which can be viewed in some ways as the source of many other related difficulties, such as the fact that not every algebraic variety is a complete intersection), and so one cannot evade all irreducibility issues merely by assuming that the original polynomials {P_i} are irreducible. Theorem 5 first appeared explicitly in the work of Heintz.

As before, the most systematic way to establish Theorem 5 is via intersection theory. In this post, though, I would like to give a slightly more elementary argument (essentially due to Schmid), based on generically perturbing the polynomials {P_1,\ldots,P_m} in the problem; this method is less powerful than the intersection-theoretic methods, which can be used for a wider range of algebraic geometry problems, but suffices for the narrow objective of proving Theorem 5. The argument requires some of the “soft” or “qualitative” theory of algebraic geometry (in particular, one needs to understand the semicontinuity properties of preimages of dominant maps), as well as basic linear algebra. As such, the proof is not completely elementary, but it uses only a small amount of algebraic machinery, and I found it easier to understand than the intersection theory arguments.

Theorem 5 is a statement about arbitrary polynomials {P_1,\ldots,P_m}. However, it turns out (in the determined case {m=d}, at least) that upper bounds on {|V^0|} are Zariski closed properties, and so it will suffice to establish this claim for generic polynomials {P_1,\ldots,P_m}. On the other hand, it is possible to use duality to deduce such upper bounds on {|V^0|} from a Zariski open condition, namely that a certain collection of polynomials are linearly independent. As such, to verify the generic case of this open condition, it suffices to establish this condition for a single family of polynomials, such as a family of monomials, in which case the condition can be verified by direct inspection. Thus we see an example of the somewhat strange strategy of establishing the general case from a specific one, using the generic case as an intermediate step.

Remark 3 There is an important caveat to note here, which is that these theorems only hold for algebraically closed fields, and in particular can fail over the reals {{\bf R}}. For instance, in {{\bf R}^3}, the polynomials

\displaystyle  P(x,y,z) = z

\displaystyle  Q(x,y,z) = z

\displaystyle  R(x,y,z) = (\prod_{j=1}^N (x-j))^2 + (\prod_{j=1}^N (y-j))^2

have degrees {1, 1, 2N} respectively, but their common zero locus {\{ (x,y,0): x,y \in \{1,\ldots,N\}\}} has cardinality {N^2}. In some cases one can safely obtain incidence bounds in {{\bf R}} by embedding {{\bf R}} inside {{\bf C}}, but as the above example shows, one needs to be careful when doing so.
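
As a sanity check of this example (a small Python sketch; note that the repetition {P = Q = z} is deliberate, so that we are in the regime {m = d = 3}): over the reals, {R(x,y,z) = 0} forces both squared products to vanish individually, so the real zero locus is exactly the {N^2} grid points, far exceeding the bound {D_1 D_2 D_3 = 2N} that would hold over {{\bf C}}.

```python
from math import prod

N = 4

def R(xv, yv):
    return prod(xv - j for j in range(1, N + 1))**2 \
         + prod(yv - j for j in range(1, N + 1))**2

# Both squares must vanish, so x and y each lie in {1,...,N}; with z = 0
# this gives N^2 = 16 real common zeros, versus D1*D2*D3 = 2N = 8.
zeros = [(xv, yv, 0) for xv in range(-2, N + 3) for yv in range(-2, N + 3)
         if R(xv, yv) == 0]
assert len(zeros) == N * N
```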


Jozsef Solymosi and I have just uploaded to the arXiv our paper “An incidence theorem in higher dimensions“, submitted to Discrete and Computational Geometry. In this paper we use the polynomial Ham Sandwich method of Guth and Katz (as discussed previously on this blog) to establish higher-dimensional versions of the Szemerédi-Trotter theorem, albeit at the cost of an epsilon loss in exponents.

Recall that the Szemerédi-Trotter theorem asserts that given any finite set of points {P} and lines {L} in the plane {{\bf R}^2}, the set of incidences {I(P,L) := \{ (p,\ell) \in P \times L: p \in \ell \}} obeys the cardinality bound

\displaystyle  |I(P,L)| \ll |P|^{2/3} |L|^{2/3} + |P| + |L|.

Apart from the constant factor, this bound is sharp. As discussed in this previous blog post, this theorem can be proven by the polynomial method, and the strategy can be rapidly summarised as follows. Select a parameter {D \geq 1}. By the polynomial Ham Sandwich theorem, one can divide {{\bf R}^2} into {O(D^2)} cell interiors, each with {O(|P|/D^2)} points, and incident to {O(|L|/D)} lines on the average, plus a boundary set which is an algebraic curve of degree {O(D)}. To handle the contribution of each cell interior, one uses a more elementary incidence bound (such as the bound {|I(P,L)| \ll |P| |L|^{1/2} + |L|} coming from the fact that two points determine at most one line); to handle the contribution on the cell boundary, one uses algebraic geometry tools such as Bezout’s theorem. One then combines all the bounds and optimises in {D} to obtain the result.
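
To get a feel for the sharpness of the {|P|^{2/3} |L|^{2/3}} term, here is a brute-force Python sketch (the standard grid-and-slopes configuration, with parameters chosen by me) whose incidence count comes within a constant factor of the bound:

```python
n = 8
P = [(x, y) for x in range(n) for y in range(2 * n * n)]  # |P| = 2n^3 points
L = [(a, b) for a in range(n) for b in range(n * n)]      # y = ax + b, |L| = n^3 lines

# Each line meets the point set in n points (as ax + b < 2n^2 for x < n),
# so I = n^4, while |P|^{2/3} |L|^{2/3} = 2^{2/3} n^4.
I = sum(1 for (x, y) in P for (a, b) in L if y == a * x + b)
assert I == n**4
print(I / (len(P)**(2/3) * len(L)**(2/3)))  # about 0.63
```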

As a general rule, the contribution of the cell interiors becomes easier to handle as {D} increases, while the contribution of the cell boundaries become easier as {D} decreases. As such, the optimal value of {D} is often an intermediate one (in the case of Szemerédi-Trotter, the choice {D = |P|^{2/3} |L|^{-1/3}} is typical). Among other things, this requires some control of moderately high degree algebraic sets, though in this planar case {{\bf R}^2}, one only needs to control algebraic curves in the plane, which are very well understood.

In higher dimensions, though, the complexity of the algebraic geometry required to control medium degree algebraic sets increases sharply; compare for instance the algebraic geometry of ruled surfaces appearing in the three-dimensional work of Guth and Katz as discussed here with the algebraic geometry of curves in the two-dimensional Szemerédi-Trotter theorem discussed here.

However, Jozsef and I discovered that it is also possible to use the polynomial method with a non-optimised value of {D}, and in particular with a bounded value {D=O(1)} of {D}, which makes the algebraic geometry treatment of the boundary significantly easier. The drawback to this is that the cell interiors can no longer be adequately controlled by trivial incidence estimates. However, if instead one controls the cell interiors by an induction hypothesis, then it turns out that in many cases one can recover a good estimate. For instance, let us consider the two-dimensional Szemerédi-Trotter theorem in the most interesting regime, namely when the {|P|^{2/3} |L|^{2/3}} term on the RHS is dominant. If we perform the cell decomposition with some small parameter {D}, then we obtain {O(D^2)} cells with {O(|P|/D^2)} points and {O(|L|/D)} lines on the average; applying the Szemerédi-Trotter theorem inductively to each of these cells, we end up with a total contribution of

\displaystyle  O( D^2 ) \times O( |P|/D^2)^{2/3} \times O( |L|/D)^{2/3} = O( |P|^{2/3} |L|^{2/3} )

for the cell interiors, and so provided that one can also control incidences on the (low degree) cell boundary, we see that we have closed the induction (up to changes in the implied constants in the {O()} notation).

Unfortunately, as is well known, the fact that the implied constants in the {O()} notation degrade when we do this prevents this induction argument from being rigorous. However, it turns out this method does work nicely to give the weaker incidence bound

\displaystyle  |I(P,L)| \ll_\epsilon |P|^{2/3+\epsilon} |L|^{2/3} + |P| + |L|

for any fixed {\epsilon > 0}; the point being that this extra epsilon of room in the exponents means that the induction hypothesis gains a factor of {D^{-\epsilon}} or so over the desired conclusion, allowing one to close the induction with no loss of implied constants if all the parameters are chosen properly. While this bound is weaker than the full Szemerédi-Trotter theorem, the argument is simpler in the sense that one only needs to understand bounded degree algebraic sets, rather than medium degree algebraic sets. As such, this argument is easier to extend to higher dimensions.
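
In more detail, redoing the cell-interior computation above with this {\epsilon}-weakened bound as the induction hypothesis, the interiors now contribute

\displaystyle  O( D^2 ) \times O( |P|/D^2)^{2/3+\epsilon} \times O( |L|/D)^{2/3} = O( D^{-2\epsilon} |P|^{2/3+\epsilon} |L|^{2/3} ),

and it is the factor {D^{-2\epsilon}} that absorbs the degradation of the implied constants, once {D} is chosen sufficiently large depending on {\epsilon}.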

Indeed, as a consequence of this strategy (and also a generic projection argument to cut down the ambient dimension, as used in this previous paper with Ellenberg and Oberlin, and an induction on dimension), we obtain in our paper a higher-dimensional incidence theorem:

Theorem 1 (Higher-dimensional Szemerédi-Trotter theorem) Let {d \geq 2k}, and let {P} and {L} be finite sets of points and {k}-planes in {{\bf R}^d}, such that any two {k}-planes in {L} intersect in at most one point. Then we have {|I(P,L)| \ll_{k,\epsilon} |P|^{2/3+\epsilon} |L|^{2/3} + |P| + |L|} for any {\epsilon > 0}.

Something like the transversality hypothesis that any two {k}-planes in {L} intersect in at most one point is necessary, as can be seen by considering the example in which the points {P} are all collinear, and the {k}-planes in {L} all contain the common line. As one particular consequence of this result, we recover (except for an epsilon loss in exponents) the complex Szemerédi-Trotter theorem of Toth, whose proof was significantly more complicated; it also gives some matrix and quaternionic analogues, in particular giving new sum-product theorems in these rings. Note that the condition {d \geq 2k} is natural, because when the ambient dimension is less than {2k}, it is not possible for two {k}-planes to intersect in just one point. Using the generic projection trick, one can easily reduce to the critical case {d=2k}.

Actually, for inductive reasons, we prove a more general result than Theorem 1, in which the {k}-planes are replaced instead by {k}-dimensional real algebraic varieties. The transversality condition is now that whenever a point {p} is incident to two such varieties {\ell, \ell'}, the tangent cones of {\ell} and {\ell'} at {p} intersect only at {p}. (Also for technical reasons, it is convenient to consider a partial subset {{\mathcal I}} of incidences in {I(P,L)}, rather than the full set of incidences. Indeed, it seems most natural to consider the triplet

\displaystyle \begin{array}{ccccc} && {\mathcal I} && \\ & \swarrow & & \searrow & \\ P & & & & L \end{array}

as a single diagram, rather than to consider just the sets {P} and {L}.)

The reason for working in this greater level of generality is that it becomes much easier to use an induction hypothesis to deal with the cell boundary; one simply intersects each variety {\ell} in {L} with the cell boundary, which usually lowers the dimension of the variety by one. (There are some exceptional cases in which {\ell} is completely trapped inside the boundary, but due to the transversality hypothesis, this cannot contribute many incidences if the ambient dimension is {2k} (so the cell boundary is only {2k-1} dimensional).)

As one application of this more general incidence result, we almost extend from {{\bf R}^2} to {{\bf C}^2} a classic result of Spencer, Szemerédi, and Trotter asserting that {n} points in the plane determine {O(n^{4/3})} unit distances, at the cost of degrading {O(n^{4/3})} to {O_\epsilon(n^{4/3+\epsilon})}.

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of eigenvectors“, submitted to Random Matrices: Theory and Applications. This paper concerns an extension of our four moment theorem for eigenvalues. Roughly speaking, that four moment theorem asserts (under mild decay conditions on the coefficients of the random matrix) that the fine-scale structure of individual eigenvalues of a Wigner random matrix depends only on the first four moments of each of the entries.

In this paper, we extend this result from eigenvalues to eigenvectors, and specifically to the coefficients of, say, the {i^{th}} eigenvector {u_i(M_n)} of a Wigner random matrix {M_n}. Roughly speaking, the main result is that the distribution of these coefficients also depends only on the first four moments of each of the entries. In particular, as the distributions of the coefficients of eigenvectors of invariant ensembles such as GOE or GUE are known to be asymptotically gaussian real (in the GOE case) or gaussian complex (in the GUE case), the same asymptotics automatically hold for Wigner matrices whose entries match those of GOE or GUE to fourth order.

(A technical point here: strictly speaking, the eigenvectors {u_i(M_n)} are only determined up to a phase, even when the eigenvalues are simple. So, to phrase the question properly, one has to perform some sort of normalisation, for instance by working with the coefficients of the spectral projection operators {P_i(M_n) := u_i(M_n) u_i(M_n)^*} instead of the eigenvectors, or rotating each eigenvector by a random phase, or by fixing the first component of each eigenvector to be positive real. This is a fairly minor technical issue here, though, and will not be discussed further.)
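
As a quick numerical illustration of the phenomenon (a numpy sketch of my own, not the method of the paper), one can compare a bulk eigenvector coefficient of a gaussian Wigner matrix with that of a Wigner matrix whose entries are drawn from a three-point distribution matching the gaussian to fourth order, using the sign normalisation just mentioned:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

def wigner(sampler):
    """Symmetric matrix with iid entries on and above the diagonal."""
    A = sampler((n, n))
    return np.triu(A) + np.triu(A, 1).T

def coeff(M, i, j):
    """j-th coefficient of the i-th eigenvector, first coordinate made positive."""
    _, V = np.linalg.eigh(M)
    v = V[:, i]
    return v[j] if v[0] > 0 else -v[j]

# Three-point entry distribution with moments (0, 1, 0, 3), matching the
# gaussian to fourth order.
vals = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
three_pt = lambda s: rng.choice(vals, size=s, p=[1/6, 2/3, 1/6])

gauss = [coeff(wigner(rng.standard_normal), n // 2, 3) for _ in range(200)]
three = [coeff(wigner(three_pt), n // 2, 3) for _ in range(200)]
# Both coefficients behave like gaussians of standard deviation n^{-1/2}:
print(np.sqrt(n) * np.std(gauss), np.sqrt(n) * np.std(three))  # both near 1
```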

This theorem strengthens a four moment theorem for eigenvectors recently established by Knowles and Yin (by a somewhat different method), in that the hypotheses are weaker (no level repulsion assumption is required, and the matrix entries only need to obey a finite moment condition rather than an exponential decay condition) and the conclusion is slightly stronger (less regularity is needed on the test function, and one can handle the joint distribution of polynomially many coefficients, rather than boundedly many coefficients). On the other hand, the Knowles-Yin paper can also handle generalised Wigner ensembles in which the variances of the entries are allowed to fluctuate somewhat.

The method used here is a variation of that in our original paper (incorporating the subsequent improvements to extend the four moment theorem from the bulk to the edge, and to replace exponential decay by a finite moment condition). That method was ultimately based on the observation that if one swapped a single entry (and its adjoint) in a Wigner random matrix, then an individual eigenvalue {\lambda_i(M_n)} would not fluctuate much as a consequence (as long as one had already truncated away the event of an unexpectedly small eigenvalue gap). The same analysis shows that the projection matrices {P_i(M_n)} obey the same stability property.

As an application of the eigenvector four moment theorem, we establish a four moment theorem for the coefficients of resolvent matrices {(\frac{1}{\sqrt{n}} M_n - zI)^{-1}}, even when {z} is on the real axis (though in that case we need to make a level repulsion hypothesis, which has already been verified in many important special cases and is likely to be true in general). This improves on an earlier four moment theorem for resolvents of Erdos, Yau, and Yin, which required {z} to stay some distance away from the real axis (specifically, that {\hbox{Im}(z) \geq n^{-1-c}} for some small {c>0}).

As we are all now very much aware, tsunamis are water waves that start in the deep ocean, usually because of an underwater earthquake (though tsunamis can also be caused by underwater landslides or volcanoes), and then propagate towards shore. Initially, tsunamis have relatively small amplitude (a metre or so is typical), which would seem to render them as harmless as wind waves. And indeed, tsunamis often pass by ships in deep ocean without anyone on board even noticing.

However, because a tsunami is generated by an event as large as an earthquake, its wavelength is huge – 200 kilometres is typical (in contrast with wind waves, whose wavelengths are typically closer to 100 metres). In particular, the wavelength of the tsunami is far greater than the depth of the ocean (which is typically 2-3 kilometres). As such, even in the deep ocean, the dynamics of tsunamis are essentially governed by the shallow water equations. One consequence of these equations is that the speed of propagation {v} of a tsunami can be approximated by the formula

\displaystyle  v \approx \sqrt{g b} \ \ \ \ \ (1)

where {b} is the depth of the ocean, and {g \approx 9.8 ms^{-2}} is the acceleration due to gravity. As such, tsunamis in deep water move very fast – speeds such as 500 kilometres per hour (300 miles per hour) are quite typical; enough to travel from Japan to the US, for instance, in less than a day. Ultimately, this is due to the incompressibility of water (and conservation of mass); the massive net pressure (or more precisely, spatial variations in this pressure) of a very broad and deep wave of water forces the profile of the wave to move horizontally at vast speeds. (Note though that this is the phase velocity of the tsunami wave, and not the velocity of the water molecules themselves, which are far slower.)

As the tsunami approaches shore, the depth {b} of course decreases, causing the tsunami to slow down, with speed proportional to the square root of the depth as per (1). Unfortunately, wave shoaling then forces the amplitude {A} to increase as the depth decreases, at a rate governed by Green’s law,

\displaystyle  A \propto \frac{1}{b^{1/4}} \ \ \ \ \ (2)

at least until the amplitude becomes comparable to the water depth (at which point the assumptions that underlie the above approximate results break down; also, in two (horizontal) spatial dimensions there will be some decay of amplitude as the tsunami spreads outwards). If one starts with a tsunami whose initial amplitude was {A_0} at depth {b_0} and computes the point at which the amplitude {A} and depth {b} become comparable using the proportionality relationship (2), some high school algebra then reveals that at this point, the amplitude of the tsunami (and the depth of the water) is about {A_0^{4/5} b_0^{1/5}}. Thus, for instance, a tsunami with initial amplitude of one metre at a depth of 2 kilometres can end up with a final amplitude of about 5 metres near shore, while still traveling at about ten metres per second (35 kilometres per hour, or 22 miles per hour), and we have all now seen the impact that can have when it hits shore.
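
The back-of-the-envelope numbers above are easy to reproduce (a Python sketch under the same simplifying assumptions; the cutoff {A = b} for the breakdown of (2) is the heuristic one used in the text):

```python
import math

g = 9.8                # gravitational acceleration, m s^-2

def speed(depth):
    """Shallow water phase speed v = sqrt(g b), valid while wavelength >> depth."""
    return math.sqrt(g * depth)

A0, b0 = 1.0, 2000.0   # initial amplitude and depth, in metres

# Deep-water speed: about 140 m/s, i.e. roughly 500 km/h.
print(f"deep ocean: {speed(b0):.0f} m/s = {3.6 * speed(b0):.0f} km/h")

# Green's law A = A0 (b0/b)^{1/4}; setting A = b and solving gives
# b = A = A0^{4/5} b0^{1/5}, about 4.6 metres here.
A_shore = A0**0.8 * b0**0.2
print(f"near shore: amplitude roughly {A_shore:.1f} m")
```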

While tsunamis are far too massive an event to control (at least in the deep ocean), we can at least model them mathematically, allowing one to predict their impact at various places along the coast with high accuracy. (For instance, here is a video of NOAA’s model of the March 11 tsunami, which has matched up very well with subsequent measurements.) The full equations and numerical methods used to perform such models are somewhat sophisticated, but by making a large number of simplifying assumptions, it is relatively easy to come up with a rough model that already predicts the basic features of tsunami propagation, such as the velocity formula (1) and the amplitude proportionality law (2). I give this (standard) derivation below the fold. The argument will largely be heuristic in nature; there are very interesting analytic issues in actually justifying many of the steps below rigorously, but I will not discuss these matters here.


A few days ago, I received the sad news that Yahya Ould Hamidoune had recently died. Hamidoune worked in additive combinatorics, and had recently solved a question on noncommutative Freiman-Kneser theorems that I had posed on this blog last year. Namely, Hamidoune showed

Theorem 1 (Noncommutative Freiman-Kneser theorem for small doubling) Let {0 < \epsilon \leq 1}, and let {S \subset G} be a finite non-empty subset of a multiplicative group {G} such that {|A \cdot S| \leq (2-\epsilon) |S|} for some finite set {A} of cardinality at least {|S|}, where {A \cdot S := \{ as: a \in A, s \in S \}} is the product set of {A} and {S}. Then there exists a finite subgroup {H} of {G} with cardinality {|H| \leq C(\epsilon) |S|}, such that {S} is covered by at most {C'(\epsilon)} right-cosets {H \cdot x} of {H}, where {C(\epsilon), C'(\epsilon) > 0} depend only on {\epsilon}.

One can of course specialise here to the case {A=S}, and view this theorem as a classification of those sets {S} of doubling constant at most {2-\epsilon}.

In fact Hamidoune’s argument, which is completely elementary, gives the very nice explicit constants {C(\epsilon) := \frac{2}{\epsilon}} and {C'(\epsilon) := \frac{2}{\epsilon} - 1}, which are essentially optimal except for factors of {2} (as can be seen by considering an arithmetic progression in an additive group). This result was also independently established (in the {A=S} case) by Tom Sanders (unpublished) by a more Fourier-analytic method, in particular drawing on Sanders’ deep results on the Wiener algebra {A(G)} on arbitrary non-commutative groups {G}.
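
To see why only factors of {2} are lost, consider the arithmetic progression example in additive notation (a quick Python check; the ambient group here is {{\bf Z}}, whose only finite subgroup is trivial):

```python
n = 100
S = set(range(n))                   # arithmetic progression in Z
SS = {a + b for a in S for b in S}  # take A = S, written additively
assert len(SS) == 2 * n - 1         # |S+S| = (2 - 1/n)|S|, so epsilon = 1/n

# The only finite subgroup of Z is H = {0}; covering S then takes |S| = n
# cosets (singletons), while Hamidoune's C'(1/n) = 2n - 1 allows about
# twice that, so the constants are sharp up to a factor of about 2.
```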

This type of result had previously been known when {2-\epsilon} was less than the golden ratio {\frac{1+\sqrt{5}}{2}}, as first observed by Freiman; see my previous blog post for more discussion.

Theorem 1 is not, strictly speaking, contained in Hamidoune’s paper, but can be extracted from his arguments, which share some similarity with the recent simple proof of the Ruzsa-Plünnecke inequality by Petridis (as discussed by Tim Gowers here), and this is what I would like to do below the fold. I also include (with permission) Sanders’ unpublished argument, which proceeds instead by Fourier-analytic methods.

For sake of concreteness we will work here over the complex numbers {{\bf C}}, although most of this discussion is valid for arbitrary algebraically closed fields (but some care needs to be taken in characteristic {2}, as always, particularly when defining the orthogonal and symplectic groups). Then one has the following four infinite families of classical Lie groups for {n \geq 1}:

  1. (Type {A_n}) The special linear group {SL_{n+1}({\bf C})} of volume-preserving linear maps {T: {\bf C}^{n+1} \rightarrow {\bf C}^{n+1}}.
  2. (Type {B_n}) The special orthogonal group {SO_{2n+1}({\bf C})} of (orientation preserving) linear maps {T: {\bf C}^{2n+1} \rightarrow {\bf C}^{2n+1}} preserving a non-degenerate symmetric form {\langle, \rangle: {\bf C}^{2n+1} \times {\bf C}^{2n+1} \rightarrow {\bf C}}, such as the standard symmetric form

    \displaystyle  \langle (z_1,\ldots,z_{2n+1}), (w_1,\ldots,w_{2n+1}) \rangle := z_1 w_1 + \ldots + z_{2n+1} w_{2n+1}.

    (this is the complexification of the more familiar real special orthogonal group {SO_{2n+1}({\bf R})}).

  3. (Type {C_n}) The symplectic group {Sp_{2n}({\bf C})} of linear maps {T: {\bf C}^{2n} \rightarrow {\bf C}^{2n}} preserving a non-degenerate antisymmetric form {\omega: {\bf C}^{2n} \times {\bf C}^{2n} \rightarrow {\bf C}}, such as the standard symplectic form

    \displaystyle  \omega((z_1,\ldots,z_{2n}), (w_1,\ldots,w_{2n})) := \sum_{j=1}^n z_j w_{n+j} - z_{n+j} w_j.

  4. (Type {D_n}) The special orthogonal group {SO_{2n}({\bf C})} of (orientation preserving) linear maps {{\bf C}^{2n} \rightarrow {\bf C}^{2n}} preserving a non-degenerate symmetric form {\langle,\rangle: {\bf C}^{2n} \times {\bf C}^{2n} \rightarrow {\bf C}} (such as the standard symmetric form).

For this post I will abuse notation somewhat and identify {A_n} with {SL_{n+1}({\bf C})}, {B_n} with {SO_{2n+1}({\bf C})}, etc., although it is more accurate to say that {SL_{n+1}({\bf C})} is a Lie group of type {A_n}, etc., as there are other forms of the Lie algebras associated to {A_n, B_n, C_n, D_n} over various fields. Over a non-algebraically closed field, such as {{\bf R}}, the list of Lie groups associated with a given type can in fact get quite complicated; see for instance this list. One can also view the double covers {Spin_{2n+1}({\bf C})} and {Spin_{2n}({\bf C})} of {SO_{2n+1}({\bf C})}, {SO_{2n}({\bf C})} (i.e. the spin groups) as being of type {B_n, D_n} respectively; however, I find the spin groups less intuitive to work with than the orthogonal groups and will therefore focus more on the orthogonal model.

The reason for this subscripting is that each of the classical groups {A_n, B_n, C_n, D_n} has rank {n}, i.e. the dimension of any maximal connected abelian subgroup of simultaneously diagonalisable elements (also known as a Cartan subgroup) is {n}. For instance:

  1. (Type {A_n}) In {SL_{n+1}({\bf C})}, one Cartan subgroup is the diagonal matrices in {SL_{n+1}({\bf C})}, which has dimension {n}.
  2. (Type {B_n}) In {SO_{2n+1}({\bf C})}, all Cartan subgroups are isomorphic to {SO_2({\bf C})^n \times SO_1({\bf C})}, which has dimension {n}.
  3. (Type {C_n}) In {Sp_{2n}({\bf C})}, all Cartan subgroups are isomorphic to {SO_2({\bf C})^n \leq Sp_2({\bf C})^n \leq Sp_{2n}({\bf C})}, which has dimension {n}.
  4. (Type {D_n}) In {SO_{2n}({\bf C})}, all Cartan subgroups are isomorphic to {SO_2({\bf C})^n}, which has dimension {n}.

(This same convention also underlies the notation for the exceptional simple Lie groups {G_2, F_4, E_6, E_7, E_8}, which we will not discuss further here.)

With two exceptions, the classical Lie groups {A_n,B_n,C_n,D_n} are all simple, i.e. their Lie algebras are non-abelian and not expressible as the direct sum of smaller Lie algebras. The two exceptions are {D_1 = SO_2({\bf C})}, which is abelian (isomorphic to {{\bf C}^\times}, in fact) and thus not considered simple, and {D_2 = SO_4({\bf C})}, which turns out to “essentially” split as {A_1 \times A_1 = SL_2({\bf C}) \times SL_2({\bf C})}, in the sense that the former group is double covered by the latter (and in particular, there is an isogeny from the latter to the former, and the Lie algebras are isomorphic).

The adjoint action of a Cartan subgroup of a Lie group {G} on the Lie algebra {{\mathfrak g}} splits that algebra into weight spaces; in the case of a simple Lie group, the associated weights are organised by a Dynkin diagram. The Dynkin diagrams for {A_n, B_n, C_n, D_n} are of course well known, and can be found for instance here.

For small {n}, some of these Dynkin diagrams are isomorphic; this is a classic instance of the tongue-in-cheek strong law of small numbers, though in this case “strong law of small diagrams” would be more appropriate. These accidental isomorphisms then give rise to the exceptional isomorphisms between Lie algebras (and thence to exceptional isogenies between Lie groups). Excluding those isomorphisms involving the exceptional Lie algebras {E_n} for {n=3,4,5}, these isomorphisms are

  1. {A_1 = B_1 = C_1};
  2. {B_2 = C_2};
  3. {D_2 = A_1 \times A_1};
  4. {D_3 = A_3}.
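
As a cheap consistency check on this list (a Python sketch; agreement of dimensions is of course necessary but not sufficient for an isomorphism), one can compare {\dim SL_{n+1} = (n+1)^2 - 1}, {\dim SO_m = m(m-1)/2}, and {\dim Sp_{2n} = n(2n+1)}:

```python
def dim_A(n): return (n + 1)**2 - 1          # SL_{n+1}
def dim_B(n): return (2*n + 1) * (2*n) // 2  # SO_{2n+1}
def dim_C(n): return n * (2*n + 1)           # Sp_{2n}
def dim_D(n): return (2*n) * (2*n - 1) // 2  # SO_{2n}

assert dim_A(1) == dim_B(1) == dim_C(1) == 3  # A1 = B1 = C1
assert dim_B(2) == dim_C(2) == 10             # B2 = C2
assert dim_D(2) == 2 * dim_A(1)               # D2 = A1 x A1
assert dim_D(3) == dim_A(3) == 15             # D3 = A3
```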

There is also a pair of exceptional isomorphisms from (the {Spin_8} form of) {D_4} to itself, a phenomenon known as triality.

These isomorphisms are most easily seen via algebraic and combinatorial tools, such as an inspection of the Dynkin diagrams (see e.g. this Wikipedia image). However, the isomorphisms listed above can also be seen by more “geometric” means, using the basic representations of the classical Lie groups on their natural vector spaces ({{\bf C}^{n+1}, {\bf C}^{2n+1}, {\bf C}^{2n}, {\bf C}^{2n}} for {A_n, B_n, C_n, D_n} respectively) and combinations thereof (such as exterior powers). (However, I don’t know of a simple way to interpret triality geometrically; the descriptions I have seen tend to involve some algebraic manipulation of the octonions or of a Clifford algebra, in a manner that tends to obscure the geometry somewhat.) These isomorphisms are quite standard (I found them, for instance, in this book of Procesi), but it was instructive for me to work through them (as I have only recently needed to start studying algebraic group theory in earnest), and I am recording them here in case anyone else is interested.


Let {G = (G,+)} be an abelian countable discrete group. A measure-preserving {G}-system {X = (X, {\mathcal X}, \mu, (T_g)_{g \in G})} (or {G}-system for short) is a probability space {(X, {\mathcal X}, \mu)}, equipped with a measure-preserving action {T_g: X \rightarrow X} of the group {G}, thus

\displaystyle  \mu( T_g(E) ) = \mu(E)

for all {E \in {\mathcal X}} and {g \in G}, and

\displaystyle  T_g T_h = T_{g+h}

for all {g, h \in G}, with {T_0} equal to the identity map. Classically, ergodic theory has focused on the cyclic case {G={\bf Z}} (in which the {T_g} are iterates of a single map {T = T_1}, with elements of {G} being interpreted as a time parameter), but one can certainly consider actions of other groups {G} also (including continuous or non-abelian groups).

A {G}-system is said to be strongly {2}-mixing, or strongly mixing for short, if one has

\displaystyle  \lim_{g \rightarrow \infty} \mu( A \cap T_g B ) = \mu(A) \mu(B)

for all {A, B \in {\mathcal X}}, where the convergence is with respect to the one-point compactification of {G} (thus, for every {\epsilon > 0}, there exists a compact (hence finite) subset {K} of {G} such that {|\mu(A \cap T_g B) - \mu(A)\mu(B)| \leq \epsilon} for all {g \not \in K}).

Similarly, we say that a {G}-system is strongly {3}-mixing if one has

\displaystyle  \lim_{g,h,h-g \rightarrow \infty} \mu( A \cap T_g B \cap T_h C ) = \mu(A) \mu(B) \mu(C)

for all {A,B,C \in {\mathcal X}}, thus for every {\epsilon > 0}, there exists a finite subset {K} of {G} such that

\displaystyle  |\mu( A \cap T_g B \cap T_h C ) - \mu(A) \mu(B) \mu(C)| \leq \epsilon

whenever {g, h, h-g} all lie outside {K}.

It is obvious that a strongly {3}-mixing system is necessarily strongly {2}-mixing. In the case of {{\bf Z}}-systems, it has been an open problem for some time, due to Rohlin, whether the converse is true:

Problem 1 (Rohlin’s problem) Is every strongly mixing {{\bf Z}}-system necessarily strongly {3}-mixing?

This is a surprisingly difficult problem. In the positive direction, a routine application of the Cauchy-Schwarz inequality (via van der Corput’s inequality) shows that every strongly mixing system is weakly {3}-mixing, which roughly speaking means that {\mu(A \cap T_g B \cap T_h C)} converges to {\mu(A) \mu(B) \mu(C)} for most {g, h \in {\bf Z}}. Indeed, every weakly mixing system is in fact weakly mixing of all orders; see for instance this blog post of Carlos Matheus, or these lecture notes of myself. So the problem is to exclude the possibility of correlation between {A}, {T_g B}, and {T_h C} for a small but non-trivial number of pairs {(g,h)}.

It is also known that the answer to Rohlin’s problem is affirmative for rank one transformations (a result of Kalikow) and for shifts with purely singular continuous spectrum (a result of Host; note that strongly mixing systems cannot have any non-trivial point spectrum). Indeed, any counterexample to the problem, if it exists, is likely to be highly pathological.

In the other direction, Rohlin’s problem is known to have a negative answer for {{\bf Z}^2}-systems, by a well-known counterexample of Ledrappier which can be described as follows. One can view a {{\bf Z}^2}-system as being essentially equivalent to a stationary process {(x_{n,m})_{(n,m) \in {\bf Z}^2}} of random variables {x_{n,m}} in some range space {\Omega} indexed by {{\bf Z}^2}, with {X} being {\Omega^{{\bf Z}^2}} with the obvious shift map

\displaystyle  T_{(g,h)} (x_{n,m})_{(n,m) \in {\bf Z}^2} := (x_{n-g,m-h})_{(n,m) \in {\bf Z}^2}.

In Ledrappier’s example, the {x_{n,m}} take values in the finite field {{\bf F}_2} of two elements, and are selected at uniformly random subject to the “Pascal’s triangle” linear constraints

\displaystyle  x_{n,m} = x_{n-1,m} + x_{n,m-1}.

A routine application of the Kolmogorov extension theorem allows one to build such a process. The point is that due to the properties of Pascal’s triangle modulo {2} (known as Sierpinski’s triangle), one has

\displaystyle  x_{n,m} = x_{n-2^k,m} + x_{n,m-2^k}

for all powers of two {2^k}. This is enough to destroy strong {3}-mixing, because it shows a strong correlation between {x}, {T_{(2^k,0)} x}, and {T_{(0,2^k)} x} for arbitrarily large {k} and randomly chosen {x \in X}. On the other hand, one can still show that {x} and {T_g x} are asymptotically uncorrelated for large {g}, giving strong {2}-mixing. Unfortunately, there are significant obstructions to converting Ledrappier’s example from a {{\bf Z}^2}-system to a {{\bf Z}}-system, as pointed out by de la Rue.
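
The binomial fact underlying this identity is easy to check directly (a short Python sketch): iterating the basic constraint {2^k} times expresses {x_{n,m}} as {\sum_i \binom{2^k}{i} x_{n-2^k+i,m-i}} over {{\bf F}_2}, and every middle binomial coefficient is even.

```python
from math import comb

for k in range(1, 11):
    N = 2**k
    # C(N, i) is even for all 0 < i < N when N is a power of two (Kummer's
    # theorem), so only the two endpoint terms survive over F_2.
    assert all(comb(N, i) % 2 == 0 for i in range(1, N))
print("only x_{n-2^k,m} and x_{n,m-2^k} survive")
```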

In this post, I would like to record a “finite field” variant of Ledrappier’s construction, in which {{\bf Z}^2} is replaced by the function field ring {{\bf F}_3[t]}, which is a “dyadic” (or more precisely, “triadic”) model for the integers (cf. this earlier blog post of mine). In other words:

Theorem 2 There exists an {{\bf F}_3[t]}-system that is strongly {2}-mixing but not strongly {3}-mixing.

The idea is much the same as that of Ledrappier; one builds a stationary {{\bf F}_3[t]}-process {(x_n)_{n \in {\bf F}_3[t]}} in which {x_n \in {\bf F}_3} are chosen uniformly at random subject to the constraints

\displaystyle  x_n + x_{n + t^k} + x_{n + 2t^k} = 0 \ \ \ \ \ (1)

for all {n \in {\bf F}_3[t]} and all {k \geq 0}. Again, this system is manifestly not strongly {3}-mixing, but can be shown to be strongly {2}-mixing; I give details below the fold.

As I discussed in this previous post, in many cases the dyadic model serves as a good guide for the non-dyadic model. However, in this case there is a curious rigidity phenomenon that seems to prevent Ledrappier-type examples from being transferable to the one-dimensional non-dyadic setting; once one restores the Archimedean nature of the underlying group, the constraints (1) not only reinforce each other strongly, but also force so much linearity on the system that one loses the strong mixing property.


In a previous blog post, I discussed the recent result of Guth and Katz obtaining a near-optimal bound on the Erdos distance problem. One of the tools used in the proof (building upon the earlier work of Elekes and Sharir) was the observation that the incidence geometry of the Euclidean group {SE(2)} of rigid motions of the plane was almost identical to that of lines in the Euclidean space {{\bf R}^3}:

Proposition 1 One can identify a (Zariski-)dense portion of {SE(2)} with {{\bf R}^3}, in such a way that for any two points {A, B} in the plane {{\bf R}^2}, the set {l_{AB} := \{ R \in SE(2): RA = B \}} of rigid motions mapping {A} to {B} forms a line in {{\bf R}^3}.

Proof: A rigid motion is either a translation or a rotation, with the latter forming a Zariski-dense subset of {SE(2)}. Identify a rotation {R} in {SE(2)} by an angle {\theta} with {0 < |\theta| < \pi} around a point {P} with the element {(P, \cot \frac{\theta}{2})} in {{\bf R}^3}. (Note that such rotations also form a Zariski-dense subset of {SE(2)}.) Elementary trigonometry then reveals that if {R} maps {A} to {B}, then {P} lies on the perpendicular bisector of {AB}, and depends in a linear fashion on {\cot \frac{\theta}{2}} (for fixed {A,B}). The claim follows. \Box
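
One can check the trigonometry numerically (a numpy sketch; the sign of the {90^\circ} rotation {J} below encodes an orientation convention of my own):

```python
import numpy as np

def rotate_about(P, theta, X):
    """Image of X under rotation by theta (counterclockwise) about P."""
    c, s = np.cos(theta), np.sin(theta)
    return P + np.array([[c, -s], [s, c]]) @ (X - P)

A, B = np.array([1.0, 2.0]), np.array([4.0, -1.0])
M = (A + B) / 2                          # midpoint of AB
J = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees

for theta in (0.3, 1.1, 2.5):
    # Centre on the perpendicular bisector of AB, at signed distance
    # (|AB|/2) cot(theta/2) from the midpoint: affine in cot(theta/2).
    P = M + 0.5 / np.tan(theta / 2) * (J @ (B - A))
    assert np.allclose(rotate_about(P, theta, A), B)
print("the rotations mapping A to B trace out a line in (P, cot(theta/2))")
```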

As seen from the proof, this proposition is an easy (though ad hoc) application of elementary trigonometry, but it was still puzzling to me why such a simple parameterisation of the incidence structure of {SE(2)} was possible. Certainly it was clear from general algebraic geometry considerations that some bounded-degree algebraic description was available, but why would the {l_{AB}} be expressible as lines and not as, say, quadratic or cubic curves?

In this post I would like to record some observations arising from discussions with Jordan Ellenberg, Jozsef Solymosi, and Josh Zahl which give a more conceptual (but less elementary) derivation of the above proposition that avoids the use of ad hoc coordinate transformations such as {R \mapsto (P, \cot\frac{\theta}{2})}. The starting point is to view the Euclidean plane {{\bf R}^2} as the scaling limit of the sphere {S^2} (a fact which is familiar to all of us through the geometry of the Earth), which makes the Euclidean group {SE(2)} a scaling limit of the rotation group {SO(3)}. The latter can then be lifted to a double cover, namely the spin group {Spin(3)}. This group has a natural interpretation as the unit quaternions, which can be identified with the unit sphere {S^3}. The analogue of the lines {l_{AB}} in this setting become great circles on this sphere; applying a projective transformation, one can map {S^3} to {{\bf R}^3} (or more precisely to the projective space {{\bf P}^3}), at which point the great circles become lines. This gives a proof of Proposition 1.

Details of the correspondence are provided below the fold. One by-product of this analysis, incidentally, is the observation that the Guth-Katz bound {g(N) \gg N / \log N} for the Erdos distance problem in the plane {{\bf R}^2}, immediately extends with almost no modification to the sphere {S^2} as well (i.e. any {N} points in {S^2} determine {\gg N/\log N} distances), as well as to the hyperbolic plane {H^2}.
