
Let {k} be a field, and let {E} be a finite extension of that field; in this post we will denote such a relationship by {k \hookrightarrow E}. We say that {E} is a Galois extension of {k} if the cardinality of the automorphism group {\mathrm{Aut}(E/k)} of {E} fixing {k} is as large as it can be, namely the degree {[E:k]} of the extension. In that case, we call {\mathrm{Aut}(E/k)} the Galois group of {E} over {k} and denote it also by {\mathrm{Gal}(E/k)}. The fundamental theorem of Galois theory then gives a one-to-one correspondence (also known as the Galois correspondence) between the intermediate extensions between {E} and {k} and the subgroups of {\mathrm{Gal}(E/k)}:

Theorem 1 (Fundamental theorem of Galois theory) Let {E} be a Galois extension of {k}.

  • (i) If {k \hookrightarrow F \hookrightarrow E} is an intermediate field between {k} and {E}, then {E} is a Galois extension of {F}, and {\mathrm{Gal}(E/F)} is a subgroup of {\mathrm{Gal}(E/k)}.
  • (ii) Conversely, if {H} is a subgroup of {\mathrm{Gal}(E/k)}, then there is a unique intermediate field {k \hookrightarrow F \hookrightarrow E} such that {\mathrm{Gal}(E/F)=H}; namely {F} is the set of elements of {E} that are fixed by {H}.
  • (iii) If {k \hookrightarrow F_1 \hookrightarrow E} and {k \hookrightarrow F_2 \hookrightarrow E}, then {F_1 \hookrightarrow F_2} if and only if {\mathrm{Gal}(E/F_2)} is a subgroup of {\mathrm{Gal}(E/F_1)}.
  • (iv) If {k \hookrightarrow F \hookrightarrow E} is an intermediate field between {k} and {E}, then {F} is a Galois extension of {k} if and only if {\mathrm{Gal}(E/F)} is a normal subgroup of {\mathrm{Gal}(E/k)}. In that case, {\mathrm{Gal}(F/k)} is isomorphic to the quotient group {\mathrm{Gal}(E/k) / \mathrm{Gal}(E/F)}.

Example 2 Let {k= {\bf Q}}, and let {E = {\bf Q}(e^{2\pi i/n})} be the degree {\phi(n)} Galois extension formed by adjoining a primitive {n^{th}} root of unity (that is to say, {E} is the cyclotomic field of order {n}). Then {\mathrm{Gal}(E/k)} is isomorphic to the multiplicative group {({\bf Z}/n{\bf Z})^\times} (the invertible elements of the ring {{\bf Z}/n{\bf Z}}). Amongst the intermediate fields, one has the cyclotomic fields of the form {F = {\bf Q}(e^{2\pi i/m})} where {m} divides {n}; they are also Galois extensions, with {\mathrm{Gal}(F/k)} isomorphic to {({\bf Z}/m{\bf Z})^\times} and {\mathrm{Gal}(E/F)} isomorphic to the elements {a} of {({\bf Z}/n{\bf Z})^\times} such that {a(n/m) = (n/m)} modulo {n}. (There can also be other intermediate fields, corresponding to other subgroups of {({\bf Z}/n{\bf Z})^\times}.)
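
As a quick numerical illustration of this example (a throwaway Python sketch of my own, not part of the original discussion; the helper names are made up), one can enumerate the subgroup of {({\bf Z}/n{\bf Z})^\times} fixing each cyclotomic subfield {{\bf Q}(e^{2\pi i/m})} for {n=12}, and check that its order times {\phi(m)} equals {\phi(n)}, as the Galois correspondence predicts:

```python
# Illustration only: enumerate Gal(E/F) inside (Z/nZ)^x for E = Q(e^{2 pi i/n}),
# F = Q(e^{2 pi i/m}), m | n, using the description a*(n/m) = n/m (mod n), i.e. a = 1 (mod m).
from math import gcd

def units(n):
    # the multiplicative group (Z/nZ)^x; note units(1) = [1], so len(units(m)) = phi(m)
    return [a for a in range(1, n + 1) if gcd(a, n) == 1]

def gal_E_over_F(n, m):
    # subgroup of (Z/nZ)^x fixing Q(e^{2 pi i/m}), for m dividing n
    return [a for a in units(n) if (a * (n // m)) % n == (n // m) % n]

n = 12
for m in (1, 2, 3, 4, 6, 12):
    H = gal_E_over_F(n, m)
    # |Gal(E/F)| * |Gal(F/Q)| should equal |Gal(E/Q)| = phi(n)
    assert len(H) * len(units(m)) == len(units(n))
    print(f"m={m:2d}: Gal(E/F) = {H}")
```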

Example 3 Let {k = {\bf C}(z)} be the field of rational functions of one indeterminate {z} with complex coefficients, and let {E = {\bf C}(w)} be the field formed by adjoining an {n^{th}} root {w = z^{1/n}} to {k}, thus {k = {\bf C}(w^n)}. Then {E} is a degree {n} Galois extension of {k} with Galois group isomorphic to {{\bf Z}/n{\bf Z}} (with an element {a \in {\bf Z}/n{\bf Z}} corresponding to the field automorphism of {E} that sends {w} to {e^{2\pi i a/n} w}). The intermediate fields are of the form {F = {\bf C}(w^{n/m})} where {m} divides {n}; they are also Galois extensions, with {\mathrm{Gal}(F/k)} isomorphic to {{\bf Z}/m{\bf Z}} and {\mathrm{Gal}(E/F)} isomorphic to the multiples of {m} in {{\bf Z}/n{\bf Z}}.

There is an analogous Galois correspondence in the covering theory of manifolds. For simplicity we restrict attention to finite covers. If {L} is a connected manifold and {\pi_{L \leftarrow M}: M \rightarrow L} is a finite covering map of {L} by another connected manifold {M}, we denote this relationship by {L \leftarrow M}. (Later on we will change our function notations slightly and write {\pi_{L \leftarrow M}: L \leftarrow M} in place of the more traditional {\pi_{L \leftarrow M}: M \rightarrow L}, and similarly for the deck transformations {g: M \leftarrow M} below; more on this below the fold.) If {L \leftarrow M}, we can define {\mathrm{Aut}(M/L)} to be the group of deck transformations: continuous maps {g: M \rightarrow M} which preserve the fibres of {\pi_{L \leftarrow M}}. We say that this covering map is a Galois cover if the cardinality of the group {\mathrm{Aut}(M/L)} is as large as it can be, namely the degree of the cover (the number of sheets). In that case we call {\mathrm{Aut}(M/L)} the Galois group of {M} over {L} and denote it by {\mathrm{Gal}(M/L)}.

Suppose {M} is a finite cover of {L}. An intermediate cover {N} between {M} and {L} is a cover of {L} by {N}, such that {L \leftarrow N \leftarrow M}, in such a way that the covering maps are compatible, in the sense that {\pi_{L \leftarrow M}} is the composition of {\pi_{L \leftarrow N}} and {\pi_{N \leftarrow M}}. This sort of compatibility condition will be implicitly assumed whenever we chain together multiple instances of the {\leftarrow} notation. Two intermediate covers {N,N'} are equivalent if they cover each other, in a fashion compatible with all the other covering maps, thus {L \leftarrow N \leftarrow N' \leftarrow M} and {L \leftarrow N' \leftarrow N \leftarrow M}. We then have the analogous Galois correspondence:

Theorem 4 (Fundamental theorem of covering spaces) Let {L \leftarrow M} be a Galois covering.

  • (i) If {L \leftarrow N \leftarrow M} is an intermediate cover between {L} and {M}, then {M} is a Galois cover of {N}, and {\mathrm{Gal}(M/N)} is a subgroup of {\mathrm{Gal}(M/L)}.
  • (ii) Conversely, if {H} is a subgroup of {\mathrm{Gal}(M/L)}, then there is an intermediate cover {L \leftarrow N \leftarrow M}, unique up to equivalence, such that {\mathrm{Gal}(M/N)=H}.
  • (iii) If {L \leftarrow N_1 \leftarrow M} and {L \leftarrow N_2 \leftarrow M}, then {L \leftarrow N_1 \leftarrow N_2 \leftarrow M} if and only if {\mathrm{Gal}(M/N_2)} is a subgroup of {\mathrm{Gal}(M/N_1)}.
  • (iv) If {L \leftarrow N \leftarrow M}, then {N} is a Galois cover of {L} if and only if {\mathrm{Gal}(M/N)} is a normal subgroup of {\mathrm{Gal}(M/L)}. In that case, {\mathrm{Gal}(N/L)} is isomorphic to the quotient group {\mathrm{Gal}(M/L) / \mathrm{Gal}(M/N)}.

Example 5 Let {L= {\bf C}^\times := {\bf C} \backslash \{0\}}, and let {M = {\bf C}^\times} be the {n}-fold cover of {L} with covering map {\pi_{L \leftarrow M}(w) := w^n}. Then {M} is a Galois cover of {L}, and {\mathrm{Gal}(M/L)} is isomorphic to the cyclic group {{\bf Z}/n{\bf Z}}. The intermediate covers are (up to equivalence) of the form {N = {\bf C}^\times} with covering map {\pi_{L \leftarrow N}(u) := u^m} where {m} divides {n}; they are also Galois covers, with {\mathrm{Gal}(N/L)} isomorphic to {{\bf Z}/m{\bf Z}} and {\mathrm{Gal}(M/N)} isomorphic to the multiples of {m} in {{\bf Z}/n{\bf Z}}.
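
Here is a small numerical sanity check of this example (my own Python sketch, purely illustrative): on a test point one can verify that every rotation {w \mapsto e^{2\pi i a/n} w} preserves the fibres of the covering map {w \mapsto w^n}, while the rotations that also preserve the fibres of the intermediate covering map {w \mapsto w^{n/m}} are exactly the multiples of {m}, matching the description of {\mathrm{Gal}(M/N)}:

```python
# Illustration only: deck transformations of the n-fold cover pi(w) = w^n of C^x,
# and the subgroup fixing the intermediate cover pi_{N<-M}(w) = w^{n/m}.
import cmath

n, m = 12, 4
zeta = cmath.exp(2j * cmath.pi / n)
w = 0.7 + 0.5j          # an arbitrary test point of M = C^x
tol = 1e-9

# every a in Z/nZ gives a fibre-preserving map of the cover M -> L
deck = [a for a in range(n) if abs((zeta**a * w)**n - w**n) < tol]
assert deck == list(range(n))

# the ones preserving the fibres of M -> N are the multiples of m, i.e. Gal(M/N)
gal_M_over_N = [a for a in range(n) if abs((zeta**a * w)**(n // m) - w**(n // m)) < tol]
print(gal_M_over_N)     # expected: [0, 4, 8]
```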

Given the strong similarity between the two theorems, it is natural to ask if there is some more concrete connection between Galois theory and the theory of finite covers.

In one direction, if the manifolds {L,M,N} have an algebraic structure (or a complex structure), then one can relate covering spaces to field extensions by considering the field of rational functions (or meromorphic functions) on the space. For instance, if {L = {\bf C}^\times} and {z} is the coordinate on {L}, one can consider the field {{\bf C}(z)} of rational functions on {L}; the {n}-fold cover {M = {\bf C}^\times} with coordinate {w} from Example 5 similarly has a field {{\bf C}(w)} of rational functions. The covering {\pi_{L \leftarrow M}(w) = w^n} relates the two coordinates {z,w} by the relation {z = w^n}, at which point one sees that the rational functions {{\bf C}(w)} on {M} form a degree {n} extension of the field {{\bf C}(z)} of rational functions on {L} (formed by adjoining an {n^{th}} root {w} of {z}). In this way we see that Example 5 is in fact closely related to Example 3.

Exercise 6 What happens if one uses meromorphic functions in place of rational functions in the above example? (To answer this question, I found it convenient to use a discrete Fourier transform associated to the multiplicative action of the {n^{th}} roots of unity on {M} to decompose the meromorphic functions on {M} as a linear combination of functions invariant under this action, times a power {w^j} of the coordinate {w} for {j=0,\dots,n-1}.)

I was curious however about the reverse direction. Starting with some field extensions {k \hookrightarrow F \hookrightarrow E}, is it possible to create manifold-like spaces {M_k \leftarrow M_F \leftarrow M_E} associated to these fields in such a fashion that (say) {M_E} behaves like a “covering space” to {M_k} with a group {\mathrm{Aut}(M_E/M_k)} of deck transformations isomorphic to {\mathrm{Aut}(E/k)}, so that the Galois correspondences agree? Also, given how the notion of a path (and associated concepts such as loops, monodromy and the fundamental group) plays a prominent role in the theory of covering spaces, can spaces such as {M_k} or {M_E} also come with a notion of a path that is somehow compatible with the Galois correspondence?

The standard answer from modern algebraic geometry (as articulated for instance in this nice MathOverflow answer by Minhyong Kim) is to set {M_E} equal to the spectrum {\mathrm{Spec}(E)} of the field {E}. As a set, the spectrum {\mathrm{Spec}(R)} of a commutative ring {R} is defined as the set of prime ideals of {R}. Generally speaking, the map {R \mapsto \mathrm{Spec}(R)} that maps a commutative ring to its spectrum tends to act like an inverse of the operation that maps a space {X} to a ring of functions on that space. For instance, if one considers the commutative ring {{\bf C}[z, z^{-1}]} of regular functions on {M = {\bf C}^\times}, then each point {z_0} in {M} gives rise to the prime ideal {\{ f \in {\bf C}[z, z^{-1}]: f(z_0)=0\}}, and one can check that these are the only such prime ideals (other than the zero ideal {(0)}), giving an almost one-to-one correspondence between {\mathrm{Spec}( {\bf C}[z,z^{-1}] )} and {M}. (The zero ideal corresponds instead to the generic point of {M}.)

Of course, the spectrum of a field such as {E} is just a point, as the zero ideal {(0)} is the only prime ideal. Naively, it would then seem that there is not enough space inside such a point to support a rich enough structure of paths to recover the Galois theory of this field. In modern algebraic geometry, one addresses this issue by considering not just the set-theoretic points of {\mathrm{Spec}(E)}, but more general “base points” {p: \mathrm{Spec}(b) \rightarrow \mathrm{Spec}(E)} that map from some other (affine) scheme {\mathrm{Spec}(b)} to {\mathrm{Spec}(E)} (one could also consider non-affine base points of course). One has to rework many of the fundamentals of the subject to accommodate this “relative point of view”, for instance replacing the usual notion of topology with an étale topology, but once one does so one obtains a very satisfactory theory.

As an exercise, I set myself the task of trying to interpret Galois theory as an analogue of covering space theory in a more classical fashion, without explicit reference to more modern concepts such as schemes, spectra, or étale topology. After some experimentation, I found a reasonably satisfactory way to do so as follows. The space {M_E} that one associates with {E} in this classical perspective is not the single point {\mathrm{Spec}(E)}, but instead the much larger space consisting of ring homomorphisms {p: E \rightarrow b} from {E} to arbitrary integral domains {b}; informally, {M_E} consists of all the “models” or “representations” of {E} (in the spirit of this previous blog post). (There is a technical set-theoretic issue here because the class of integral domains {b} is a proper class, so that {M_E} will also be a proper class; I will completely ignore such technicalities in this post.) We view each such homomorphism {p: E \rightarrow b} as a single point in {M_E}. The analogous notion of a path from one point {p: E \rightarrow b} to another {p': E \rightarrow b'} is then a homomorphism {\gamma: b \rightarrow b'} of integral domains, such that {p'} is the composition of {p} with {\gamma} (thus {p' = \gamma \circ p}). Note that every prime ideal {I} in the spectrum {\mathrm{Spec}(R)} of a commutative ring {R} gives rise to a point {p_I} in the space {M_R} defined here, namely the quotient map {p_I: R \rightarrow R/I} to the ring {R/I}, which is an integral domain because {I} is prime. So one can think of {\mathrm{Spec}(R)} as being a distinguished subset of {M_R}; alternatively, one can think of {M_R} as a sort of “penumbra” surrounding {\mathrm{Spec}(R)}. In particular, when {E} is a field, {\mathrm{Spec}(E) = \{(0)\}} defines a special point {p_E} in {M_E}, namely the identity homomorphism {p_E: E \rightarrow E}.
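
To make these definitions concrete, here is a tiny toy model in Python (my own illustration, not part of the formal development): for {E = {\bf Q}(i)}, the two embeddings of {E} into the integral domain {b = {\bf C}} are points of {M_E}, and complex conjugation is a path from one to the other, since composing it with the first embedding recovers the second.

```python
# Toy model: a point of M_E is a ring homomorphism p: E -> b into an integral domain b,
# and a path from p to p' is a homomorphism gamma: b -> b' with p' = gamma o p.
# Here E = Q(i), with elements recorded as pairs (x, y) <-> x + y*i, and b = b' = C.

def p_plus(x, y):    # the point of M_E sending i to +1j
    return complex(x, y)

def p_minus(x, y):   # the point of M_E sending i to -1j
    return complex(x, -y)

def gamma(z):        # complex conjugation, a ring homomorphism C -> C
    return z.conjugate()

for (x, y) in [(1, 2), (3, -5), (0, 1)]:
    assert gamma(p_plus(x, y)) == p_minus(x, y)   # p_minus = gamma o p_plus, so gamma is a path
print("complex conjugation is a path from p_plus to p_minus in M_E")
```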

Below the fold I would like to record this interpretation of Galois theory, by first revisiting the theory of covering spaces using paths as the basic building block, and then adapting that theory to the theory of field extensions using the spaces indicated above. This is not too far from the usual scheme-theoretic way of phrasing the connection between the two topics (basically I have replaced Ă©tale-type points {p: \mathrm{Spec}(b) \rightarrow \mathrm{Spec}(E)} with more classical points {p: E \rightarrow b}), but I had not seen it explicitly articulated before, so I am recording it here for my own benefit and for any other readers who may be interested.

Read the rest of this entry »

In the traditional foundations of probability theory, one selects a probability space {(\Omega, {\mathcal B}, {\mathbf P})}, and makes a distinction between deterministic mathematical objects, which do not depend on the sampled state {\omega \in \Omega}, and stochastic (or random) mathematical objects, which do depend (but in a measurable fashion) on the sampled state {\omega \in \Omega}. For instance, a deterministic real number would just be an element {x \in {\bf R}}, whereas a stochastic real number (or real random variable) would be a measurable function {x: \Omega \rightarrow {\bf R}}, where in this post {{\bf R}} will always be endowed with the Borel {\sigma}-algebra. (For readers familiar with nonstandard analysis, the adjectives “deterministic” and “stochastic” will be used here in a manner analogous to the uses of the adjectives “standard” and “nonstandard” in nonstandard analysis. The analogy is particularly close when comparing with the “cheap nonstandard analysis” discussed in this previous blog post. We will also use “relative to {\Omega}” as a synonym for “stochastic”.)

Actually, for our purposes we will adopt the philosophy of identifying stochastic objects that agree almost surely, so if one was to be completely precise, we should define a stochastic real number to be an equivalence class {[x]} of measurable functions {x: \Omega \rightarrow {\bf R}}, up to almost sure equivalence. However, we shall often abuse notation and write {[x]} simply as {x}.

More generally, given any measurable space {X = (X, {\mathcal X})}, we can talk either about deterministic elements {x \in X}, or about stochastic elements of {X}, that is to say equivalence classes {[x]} of measurable maps {x: \Omega \rightarrow X} up to almost sure equivalence. We will use {\Gamma(X|\Omega)} to denote the set of all stochastic elements of {X}. (For readers familiar with sheaves, it may be helpful for the purposes of this post to think of {\Gamma(X|\Omega)} as the space of measurable global sections of the trivial {X}-bundle over {\Omega}.) Of course every deterministic element {x} of {X} can also be viewed as a stochastic element {x|\Omega \in \Gamma(X|\Omega)} given by (the equivalence class of) the constant function {\omega \mapsto x}, thus giving an embedding of {X} into {\Gamma(X|\Omega)}. We do not attempt here to give an interpretation of {\Gamma(X|\Omega)} for sets {X} that are not equipped with a {\sigma}-algebra {{\mathcal X}}.

Remark 1 In my previous post on the foundations of probability theory, I emphasised the freedom to extend the sample space {(\Omega, {\mathcal B}, {\mathbf P})} to a larger sample space whenever one wished to inject additional sources of randomness. This is of course an important freedom to possess (and in the current formalism, is the analogue of the important operation of base change in algebraic geometry), but in this post we will focus on a single fixed sample space {(\Omega, {\mathcal B}, {\mathbf P})}, and not consider extensions of this space, so that one only has to consider two types of mathematical objects (deterministic and stochastic), as opposed to having many more such types, one for each potential choice of sample space (with the deterministic objects corresponding to the case when the sample space collapses to a point).

Any (measurable) {k}-ary operation on deterministic mathematical objects then extends to their stochastic counterparts by applying the operation pointwise. For instance, the addition operation {+: {\bf R} \times {\bf R} \rightarrow {\bf R}} on deterministic real numbers extends to an addition operation {+: \Gamma({\bf R}|\Omega) \times \Gamma({\bf R}|\Omega) \rightarrow \Gamma({\bf R}|\Omega)}, by defining the class {[x]+[y]} for {x,y: \Omega \rightarrow {\bf R}} to be the equivalence class of the function {\omega \mapsto x(\omega) + y(\omega)}; this operation is easily seen to be well-defined. More generally, any measurable {k}-ary deterministic operation {O: X_1 \times \dots \times X_k \rightarrow Y} between measurable spaces {X_1,\dots,X_k,Y} extends to a stochastic operation {O: \Gamma(X_1|\Omega) \times \dots \times \Gamma(X_k|\Omega) \rightarrow \Gamma(Y|\Omega)} in the obvious manner.

There is a similar story for {k}-ary relations {R: X_1 \times \dots \times X_k \rightarrow \{\hbox{true},\hbox{false}\}}, although here one has to make a distinction between a deterministic reading of the relation and a stochastic one. Namely, if we are given stochastic objects {x_i \in \Gamma(X_i|\Omega)} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} does not necessarily take values in the deterministic Boolean algebra {\{ \hbox{true}, \hbox{false}\}}, but only in the stochastic Boolean algebra {\Gamma(\{ \hbox{true}, \hbox{false}\}|\Omega)} – thus {R(x_1,\dots,x_k)} may be true with some positive probability and also false with some positive probability (with the event that {R(x_1,\dots,x_k)} is stochastically true being determined up to null events). Of course, the deterministic Boolean algebra embeds in the stochastic one, so we can talk about a relation {R(x_1,\dots,x_k)} being deterministically true or deterministically false, which (due to our identification of stochastic objects that agree almost surely) means that {R(x_1(\omega),\dots,x_k(\omega))} is almost surely true or almost surely false respectively. For instance, given two stochastic objects {x,y}, one can view their equality relation {x=y} as having a stochastic truth value. This is distinct from the way the equality symbol {=} is used in mathematical logic, which we will now call “equality in the deterministic sense” to reduce confusion. Thus, {x=y} in the deterministic sense if and only if the stochastic truth value of {x=y} is equal to {\hbox{true}}, that is to say that {x(\omega)=y(\omega)} for almost all {\omega}.

Any universal identity for deterministic operations (or universal implication between identities) extends to their stochastic counterparts: for instance, addition is commutative, associative, and cancellative on the space of deterministic reals {{\bf R}}, and is therefore commutative, associative, and cancellative on stochastic reals {\Gamma({\bf R}|\Omega)} as well. However, one has to be more careful when working with mathematical laws that are not expressible as universal identities, or implications between identities. For instance, {{\bf R}} is an integral domain: if {x_1,x_2 \in {\bf R}} are deterministic reals such that {x_1 x_2=0}, then one must have {x_1=0} or {x_2=0}. However, if {x_1, x_2 \in \Gamma({\bf R}|\Omega)} are stochastic reals such that {x_1 x_2 = 0} (in the deterministic sense), then it is no longer necessarily the case that {x_1=0} (in the deterministic sense) or that {x_2=0} (in the deterministic sense); however, it is still true that “{x_1=0} or {x_2=0}” is true in the deterministic sense if one interprets the boolean operator “or” stochastically, thus “{x_1(\omega)=0} or {x_2(\omega)=0}” is true for almost all {\omega}. Another way to properly obtain a stochastic interpretation of the integral domain property of {{\bf R}} is to rewrite it as

\displaystyle  x_1,x_2 \in {\bf R}, x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \{1,2\}

and then make all sets stochastic to obtain the true statement

\displaystyle  x_1,x_2 \in \Gamma({\bf R}|\Omega), x_1 x_2 = 0 \implies x_i=0 \hbox{ for some } i \in \Gamma(\{1,2\}|\Omega),

thus we have to allow the index {i} for which vanishing {x_i=0} occurs to also be stochastic, rather than deterministic. (A technical note: when one proves this statement, one has to select {i} in a measurable fashion; for instance, one can choose {i(\omega)} to equal {1} when {x_1(\omega)=0}, and {2} otherwise (so that in the “tie-breaking” case when {x_1(\omega)} and {x_2(\omega)} both vanish, one always selects {i(\omega)} to equal {1}).)
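
A two-point sample space already illustrates all of this; the following Python sketch (my own toy example, with made-up random variables) checks that {x_1 x_2 = 0} can hold deterministically while neither {x_1 = 0} nor {x_2 = 0} does, and that the measurable choice of index {i(\omega)} described in the technical note repairs the conclusion.

```python
# Toy model on Omega = {0, 1}: the integral domain property only survives stochastically.
Omega = [0, 1]
x1 = {0: 0.0, 1: 3.0}   # vanishes only on the event {0}
x2 = {0: 5.0, 1: 0.0}   # vanishes only on the event {1}

assert all(x1[w] * x2[w] == 0 for w in Omega)   # "x1 x2 = 0" holds deterministically
assert not all(x1[w] == 0 for w in Omega)       # but "x1 = 0" fails deterministically
assert not all(x2[w] == 0 for w in Omega)       # and so does "x2 = 0"

# the stochastic index from the technical note: i(omega) = 1 when x1(omega) = 0, else 2
i = {w: 1 if x1[w] == 0 else 2 for w in Omega}
assert all((x1[w] if i[w] == 1 else x2[w]) == 0 for w in Omega)   # "x_i = 0" holds
print(i)   # {0: 1, 1: 2}
```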

Similarly, the law of the excluded middle fails when interpreted deterministically, but remains true when interpreted stochastically: if {S} is a stochastic statement, then it is not necessarily the case that {S} is either deterministically true or deterministically false; however the sentence “{S} or not-{S}” is still deterministically true if the boolean operator “or” is interpreted stochastically rather than deterministically.

To avoid having to keep pointing out which operations are interpreted stochastically and which ones are interpreted deterministically, we will use the following convention: if we assert that a mathematical sentence {S} involving stochastic objects is true, then (unless otherwise specified) we mean that {S} is deterministically true, assuming that all relations used inside {S} are interpreted stochastically. For instance, if {x,y} are stochastic reals, when we assert that “Exactly one of {x < y}, {x=y}, or {x>y} is true”, then by default it is understood that the relations {<}, {=}, {>} and the boolean operator “exactly one of” are interpreted stochastically, and the assertion is that the sentence is deterministically true.

In the above discussion, the stochastic objects {x} being considered were elements of a deterministic space {X}, such as the reals {{\bf R}}. However, it can often be convenient to generalise this situation by allowing the ambient space {X} to also be stochastic. For instance, one might wish to consider a stochastic vector {v(\omega)} inside a stochastic vector space {V(\omega)}, or a stochastic edge {e} of a stochastic graph {G(\omega)}. In order to formally describe this situation within the classical framework of measure theory, one needs to place all the ambient spaces {X(\omega)} inside a measurable space. This can certainly be done in many contexts (e.g. when considering random graphs on a deterministic set of vertices, or if one is willing to work up to equivalence and place the ambient spaces inside a suitable moduli space), but is not completely natural in other contexts. For instance, if one wishes to consider stochastic vector spaces of potentially unbounded dimension (in particular, potentially larger than any given cardinal that one might specify in advance), then the class of all possible vector spaces is so large that it becomes a proper class rather than a set (even if one works up to equivalence), making it problematic to give this class the structure of a measurable space; furthermore, even once one does so, one needs to take additional care to pin down what it would mean for a random vector {\omega \mapsto v_\omega} lying in a random vector space {\omega \mapsto V_\omega} to depend “measurably” on {\omega}.

Of course, in any reasonable application one can avoid the set theoretic issues at least by various ad hoc means, for instance by restricting the dimension of all spaces involved to some fixed cardinal such as {2^{\aleph_0}}. However, the measure-theoretic issues can require some additional effort to resolve properly.

In this post I would like to describe a different way to formalise stochastic spaces, and stochastic elements of these spaces, by viewing the spaces as a measure-theoretic analogue of a sheaf, but being over the probability space {\Omega} rather than over a topological space; stochastic objects are then sections of such sheaves. Actually, for minor technical reasons it is convenient to work in the slightly more general setting in which the base space {\Omega} is a finite measure space {(\Omega, {\mathcal B}, \mu)} rather than a probability space, thus {\mu(\Omega)} can take any value in {[0,+\infty)} rather than being normalised to equal {1}. This will allow us to easily localise to subevents {\Omega'} of {\Omega} without the need for normalisation, even when {\Omega'} is a null event (though we caution that the map {x \mapsto x|\Omega'} from deterministic objects {x} ceases to be injective in this latter case). We will however still continue to use probabilistic terminology despite the lack of normalisation; thus for instance, sets {E} in {{\mathcal B}} will be referred to as events, the measure {\mu(E)} of such a set will be referred to as the probability (which is now permitted to exceed {1} in some cases), and an event whose complement is a null event shall be said to hold almost surely. It is in fact likely that almost all of the theory below extends to base spaces which are {\sigma}-finite rather than finite (for instance, by damping the measure to become finite, without introducing any further null events), although we will not pursue this further generalisation here.

The approach taken in this post is “topos-theoretic” in nature (although we will not use the language of topoi explicitly here), and is well suited to a “pointless” or “point-free” approach to probability theory, in which the role of the stochastic state {\omega \in \Omega} is suppressed as much as possible; instead, one strives to always adopt a “relative point of view”, with all objects under consideration being viewed as stochastic objects relative to the underlying base space {\Omega}. In this perspective, the stochastic version of a set is as follows.

Definition 1 (Stochastic set) Unless otherwise specified, we assume that we are given a fixed finite measure space {\Omega = (\Omega, {\mathcal B}, \mu)} (which we refer to as the base space). A stochastic set (relative to {\Omega}) is a tuple {X|\Omega = (\Gamma(X|E)_{E \in {\mathcal B}}, ((|E))_{E \subset F, E,F \in {\mathcal B}})} consisting of the following objects:

  • A set {\Gamma(X|E)} assigned to each event {E \in {\mathcal B}}; and
  • A restriction map {x \mapsto x|E} from {\Gamma(X|F)} to {\Gamma(X|E)} assigned to each pair {E \subset F} of nested events {E,F \in {\mathcal B}}. (Strictly speaking, one should indicate the dependence on {F} in the notation for the restriction map, e.g. using {x \mapsto x|(E \leftarrow F)} instead of {x \mapsto x|E}, but we will abuse notation by omitting the {F} dependence.)

We refer to elements of {\Gamma(X|E)} as local stochastic elements of the stochastic set {X|\Omega}, localised to the event {E}, and elements of {\Gamma(X|\Omega)} as global stochastic elements (or simply elements) of the stochastic set. (In the language of sheaves, one would use “sections” instead of “elements” here, but I prefer to use the latter terminology here, for compatibility with conventional probabilistic notation, where for instance measurable maps from {\Omega} to {{\bf R}} are referred to as real random variables, rather than sections of the reals.)

Furthermore, we impose the following axioms:

  • (Category) The map {x \mapsto x|E} from {\Gamma(X|E)} to {\Gamma(X|E)} is the identity map, and if {E \subset F \subset G} are events in {{\mathcal B}}, then {((x|F)|E) = (x|E)} for all {x \in \Gamma(X|G)}.
  • (Null events trivial) If {E \in {\mathcal B}} is a null event, then the set {\Gamma(X|E)} is a singleton set. (In particular, {\Gamma(X|\emptyset)} is always a singleton set; this is analogous to the convention that {x^0=1} for any number {x}.)
  • (Countable gluing) Suppose that for each natural number {n}, one has an event {E_n \in {\mathcal B}} and an element {x_n \in \Gamma(X|E_n)} such that {x_n|(E_n \cap E_m) = x_m|(E_n \cap E_m)} for all {n,m}. Then there exists a unique {x\in \Gamma(X|\bigcup_{n=1}^\infty E_n)} such that {x_n = x|E_n} for all {n}.

If {\Omega'} is an event in {\Omega}, we define the localisation {X|\Omega'} of the stochastic set {X|\Omega} to {\Omega'} to be the stochastic set

\displaystyle X|\Omega' := (\Gamma(X|E)_{E \in {\mathcal B}; E \subset \Omega'}, ((|E))_{E \subset F \subset \Omega', E,F \in {\mathcal B}})

relative to {\Omega'}. (Note that there is no need to renormalise the measure on {\Omega'}, as we are not demanding that our base space have total measure {1}.)

The following fact is useful for actually verifying that a given object indeed has the structure of a stochastic set:

Exercise 1 Show that to verify the countable gluing axiom of a stochastic set, it suffices to do so under the additional hypothesis that the events {E_n} are disjoint. (Note that this is quite different from the situation with sheaves over a topological space, in which the analogous gluing axiom is often trivial in the disjoint case but has non-trivial content in the overlapping case. This is ultimately because a {\sigma}-algebra is closed under all Boolean operations, whereas a topology is only closed under union and intersection.)

Let us illustrate the concept of a stochastic set with some examples.

Example 1 (Discrete case) A simple case arises when {\Omega} is a discrete space which is at most countable. Suppose we assign a set {X_\omega} to each {\omega \in \Omega}, with {X_\omega} a singleton if {\mu(\{\omega\})=0}. One then sets {\Gamma(X|E) := \prod_{\omega \in E} X_\omega}, with the obvious restriction maps, giving rise to a stochastic set {X|\Omega}. (Thus, a local element {x} of {\Gamma(X|E)} can be viewed as a map {\omega \mapsto x(\omega)} on {E} that takes values in {X_\omega} for each {\omega \in E}.) Conversely, it is not difficult to see that any stochastic set over an at most countable discrete probability space {\Omega} is of this form up to isomorphism. In this case, one can think of {X|\Omega} as a bundle of sets {X_\omega} over each point {\omega} (of positive probability) in the base space {\Omega}. One can extend this bundle interpretation of stochastic sets to reasonably nice sample spaces {\Omega} (such as standard Borel spaces) and similarly reasonable {X}; however, I would like to avoid this interpretation in the formalism below in order to be able to easily work in settings in which {\Omega} and {X} are very “large” (e.g. not separable in any reasonable sense). Note that we permit some of the {X_\omega} to be empty, thus it is possible for {\Gamma(X|\Omega)} to be empty whilst {\Gamma(X|E)} is non-empty for some strict subevents {E} of {\Omega}. (This is analogous to how it is possible for a sheaf to have local sections but no global sections.) As such, the space {\Gamma(X|\Omega)} of global elements does not completely determine the stochastic set {X|\Omega}; one sometimes needs to localise to an event {E} in order to see the full structure of such a set. Thus it is important to distinguish between a stochastic set {X|\Omega} and its space {\Gamma(X|\Omega)} of global elements. (As such, it is a slight abuse of the axiom of extensionality to refer to global elements of {X|\Omega} simply as “elements”, but hopefully this should not cause too much confusion.)
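
For readers who like to compute with the discrete model, here is a short Python sketch (my own illustration, with hypothetical helper names) of a stochastic set over a three-point base space: local elements over an event are choice functions into the fibres, restriction is projection, and gluing over disjoint events simply merges partial choice functions.

```python
# Discrete model of a stochastic set: Gamma(X|E) = prod_{omega in E} X_omega.
from itertools import product

fibres = {0: {"a", "b"}, 1: {"c"}, 2: {"d", "e"}}   # the sets X_omega (all of positive measure)

def Gamma(E):
    # all local elements over the event E, encoded as dictionaries omega -> X_omega
    E = sorted(E)
    return [dict(zip(E, choice)) for choice in product(*(fibres[w] for w in E))]

def restrict(x, E):
    # the restriction map Gamma(X|F) -> Gamma(X|E) for E a subevent of F
    return {w: x[w] for w in E}

def glue(pieces):
    # gluing of compatible local elements over disjoint events (cf. Exercise 1)
    x = {}
    for piece in pieces:
        x.update(piece)
    return x

x = glue([{0: "a"}, {1: "c"}, {2: "e"}])
assert restrict(x, {0, 2}) == {0: "a", 2: "e"}
print(len(Gamma({0, 1, 2})))   # 2 * 1 * 2 = 4 global elements
```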

Example 2 (Measurable spaces as stochastic sets) Returning now to a general base space {\Omega}, any (deterministic) measurable space {X} gives rise to a stochastic set {X|\Omega}, with {\Gamma(X|E)} being defined as in the previous discussion as the measurable functions from {E} to {X} modulo almost everywhere equivalence (in particular, {\Gamma(X|E)} is a singleton set when {E} is null), with the usual restriction maps. The constraint of measurability on the maps {x: E \rightarrow X}, together with the quotienting by almost sure equivalence, means that {\Gamma(X|E)} is now more complicated than a plain Cartesian product {\prod_{\omega \in E} X_\omega} of fibres, but this still serves as a useful first approximation to what {\Gamma(X|E)} is for the purposes of developing intuition. Indeed, the measurability constraint is so weak (as compared for instance to topological or smooth constraints in other contexts, such as sheaves of continuous or smooth sections of bundles) that the intuition of essentially independent fibres is quite an accurate one, at least if one avoids consideration of an uncountable number of objects simultaneously.

Example 3 (Extended Hilbert modules) This example is the one that motivated this post for me. Suppose that one has an extension {(\tilde \Omega, \tilde {\mathcal B}, \tilde \mu)} of the base space {(\Omega, {\mathcal B},\mu)}, thus we have a measurable factor map {\pi: \tilde \Omega \rightarrow \Omega} such that the pushforward of the measure {\tilde \mu} by {\pi} is equal to {\mu}. Then we have a conditional expectation operator {\pi_*: L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^2(\Omega,{\mathcal B},\mu)}, defined as the adjoint of the pullback map {\pi^*: L^2(\Omega,{\mathcal B},\mu) \rightarrow L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)}. As is well known, the conditional expectation operator also extends to a contraction {\pi_*: L^1(\tilde \Omega,\tilde {\mathcal B},\tilde \mu) \rightarrow L^1(\Omega,{\mathcal B}, \mu)}; by monotone convergence we may also extend {\pi_*} to a map from measurable functions from {\tilde \Omega} to the extended non-negative reals {[0,+\infty]}, to measurable functions from {\Omega} to {[0,+\infty]}. We then define the “extended Hilbert module” {L^2(\tilde \Omega|\Omega)} to be the space of functions {f \in L^2(\tilde \Omega,\tilde {\mathcal B},\tilde \mu)} with {\pi_*(|f|^2)} finite almost everywhere. This is an extended version of the Hilbert module {L^\infty_{\Omega} L^2(\tilde \Omega|\Omega)}, which is defined similarly except that {\pi_*(|f|^2)} is required to lie in {L^\infty(\Omega,{\mathcal B},\mu)}; this is a Hilbert module over {L^\infty(\Omega, {\mathcal B}, \mu)} which is of particular importance in the Furstenberg-Zimmer structure theory of measure-preserving systems. We can then define the stochastic set {L^2_\pi(\tilde \Omega)|\Omega} by setting

\displaystyle  \Gamma(L^2_\pi(\tilde \Omega)|E) := L^2( \pi^{-1}(E) | E )

with the obvious restriction maps. In the case that {\Omega,\tilde \Omega} are standard Borel spaces, one can disintegrate {\tilde \mu} as an integral {\tilde \mu = \int_\Omega \nu_\omega\ d\mu(\omega)} of probability measures {\nu_\omega} (supported in the fibre {\pi^{-1}(\{\omega\})}), in which case this stochastic set can be viewed as having fibres {L^2( \tilde \Omega, \tilde {\mathcal B}, \nu_\omega )} (though if {\Omega} is not discrete, there are still some measurability conditions in {\omega} on the local and global elements that need to be imposed). However, I am interested in the case when {\Omega,\tilde \Omega} are not standard Borel spaces (in fact, I will take them to be algebraic probability spaces, as defined in this previous post), in which case disintegrations are not available. However, it appears that the stochastic analysis developed in this blog post can serve as a substitute for the tool of disintegration in this context.
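
In the toy case of finite sample spaces everything in this example can be written down explicitly; the following Python sketch (my own illustration, not part of the Furstenberg-Zimmer machinery) computes the conditional expectation {\pi_*} for a factor map between two finite spaces as a fibre average against the disintegrated measures {\nu_\omega}.

```python
# Toy computation of the conditional expectation pi_* for a factor map pi: tilde_Omega -> Omega.
tilde_Omega = ["a1", "a2", "b1", "b2", "b3"]
pi = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "b3": "B"}
tilde_mu = {w: 0.25 for w in tilde_Omega}   # equal weights (the measure need not be normalised)

def mu(omega):
    # the pushforward measure on Omega = {"A", "B"}
    return sum(tilde_mu[w] for w in tilde_Omega if pi[w] == omega)

def cond_exp(f, omega):
    # (pi_* f)(omega): average of f over the fibre pi^{-1}({omega}), i.e. against nu_omega
    return sum(f[w] * tilde_mu[w] for w in tilde_Omega if pi[w] == omega) / mu(omega)

f = {"a1": 1.0, "a2": 3.0, "b1": 2.0, "b2": 2.0, "b3": 5.0}
print({omega: cond_exp(f, omega) for omega in ("A", "B")})   # {'A': 2.0, 'B': 3.0}
# pi_*(|f|^2) is computed the same way; asking that it be finite a.e. defines L^2(tilde_Omega|Omega)
```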

We make the remark that if {X|\Omega} is a stochastic set and {E, F} are events that are equivalent up to null events, then one can identify {\Gamma(X|E)} with {\Gamma(X|F)} (through their common restriction to {\Gamma(X|(E \cap F))}, with the restriction maps now being bijections). As such, the notion of a stochastic set does not require the full structure of a concrete probability space {(\Omega, {\mathcal B}, {\mathbf P})}; one could also have defined the notion using only the abstract {\sigma}-algebra consisting of {{\mathcal B}} modulo null events as the base space, or equivalently one could define stochastic sets over the algebraic probability spaces defined in this previous post. However, we will stick with the classical formalism of concrete probability spaces here so as to keep the notation reasonably familiar.

As a corollary of the above observation, we see that if the base space {\Omega} has total measure {0}, then all stochastic sets are trivial (they are just points).

Exercise 2 If {X|\Omega} is a stochastic set, show that there exists an event {\Omega'} with the property that for any event {E}, {\Gamma(X|E)} is non-empty if and only if {E} is contained in {\Omega'} modulo null events. (In particular, {\Omega'} is unique up to null events.) Hint: consider the numbers {\mu( E )} for {E} ranging over all events with {\Gamma(X|E)} non-empty, and form a maximising sequence for these numbers. Then use all three axioms of a stochastic set.

One can now start to take many of the fundamental objects, operations, and results in set theory (and, hence, in most other categories of mathematics) and establish analogues relative to a finite measure space. Implicitly, what we will be doing in the next few paragraphs is endowing the category of stochastic sets with the structure of an elementary topos. However, to keep things reasonably concrete, we will not explicitly emphasise the topos-theoretic formalism here, although it is certainly lurking in the background.

Firstly, we define a stochastic function {f: X|\Omega \rightarrow Y|\Omega} between two stochastic sets {X|\Omega, Y|\Omega} to be a collection of maps {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} for each {E \in {\mathcal B}} which form a natural transformation in the sense that {f(x|E) = f(x)|E} for all {x \in \Gamma(X|F)} and nested events {E \subset F}. In the case when {\Omega} is discrete and at most countable (and after deleting all null points), a stochastic function is nothing more than a collection of functions {f_\omega: X_\omega \rightarrow Y_\omega} for each {\omega \in \Omega}, with the function {f: \Gamma(X|E) \rightarrow \Gamma(Y|E)} then being a direct sum of the factor functions {f_\omega}:

\displaystyle  f( (x_\omega)_{\omega \in E} ) = ( f_\omega(x_\omega) )_{\omega \in E}.

Thus (in the discrete, at most countable setting, at least) stochastic functions do not mix together information from different states {\omega} in a sample space; the value of {f(x)} at {\omega} depends only on the value of {x} at {\omega}. The situation is a bit more subtle for continuous probability spaces, due to the identification of stochastic objects that agree almost surely; nevertheless, it is still good intuition to think of stochastic functions as essentially being “pointwise” or “local” in nature.

One can now form the stochastic set {\hbox{Hom}(X \rightarrow Y)|\Omega} of functions from {X|\Omega} to {Y|\Omega}, by setting {\Gamma(\hbox{Hom}(X \rightarrow Y)|E)} for any event {E} to be the set of local stochastic functions {f: X|E \rightarrow Y|E} of the localisations of {X|\Omega, Y|\Omega} to {E}; this is a stochastic set if we use the obvious restriction maps. In the case when {\Omega} is discrete and at most countable, the fibre {\hbox{Hom}(X \rightarrow Y)_\omega} at a point {\omega} of positive measure is simply the set {Y_\omega^{X_\omega}} of functions from {X_\omega} to {Y_\omega}.

In a similar spirit, we say that one stochastic set {Y|\Omega} is a (stochastic) subset of another {X|\Omega}, and write {Y|\Omega \subset X|\Omega}, if we have a stochastic inclusion map, thus {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, with the restriction maps being compatible. We can then define the power set {2^X|\Omega} of a stochastic set {X|\Omega} by setting {\Gamma(2^X|E)} for any event {E} to be the set of all stochastic subsets {Y|E} of {X|E} relative to {E}; it is easy to see that {2^X|\Omega} is a stochastic set with the obvious restriction maps (one can also identify {2^X|\Omega} with {\hbox{Hom}(X, \{\hbox{true},\hbox{false}\})|\Omega} in the obvious fashion). Again, when {\Omega} is discrete and at most countable, the fibre of {2^X|\Omega} at a point {\omega} of positive measure is simply the deterministic power set {2^{X_\omega}}.

Note that if {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function and {Y'|\Omega} is a stochastic subset of {Y|\Omega}, then the inverse image {f^{-1}(Y')|\Omega}, defined by setting {\Gamma(f^{-1}(Y')|E)} for any event {E} to be the set of those {x \in \Gamma(X|E)} with {f(x) \in \Gamma(Y'|E)}, is a stochastic subset of {X|\Omega}. In particular, given a {k}-ary relation {R: X_1 \times \dots \times X_k|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}, the inverse image {R^{-1}( \{ \hbox{true} \}|\Omega )} is a stochastic subset of {X_1 \times \dots \times X_k|\Omega}, which by abuse of notation we denote as

\displaystyle  \{ (x_1,\dots,x_k) \in X_1 \times \dots \times X_k: R(x_1,\dots,x_k) \hbox{ is true} \}|\Omega.

In a similar spirit, if {X'|\Omega} is a stochastic subset of {X|\Omega} and {f: X|\Omega \rightarrow Y|\Omega} is a stochastic function, we can define the image {f(X')|\Omega} by setting {\Gamma(f(X')|E)} to be the set of those {f(x)} with {x \in \Gamma(X'|E)}; one easily verifies that this is a stochastic subset of {Y|\Omega}.

Remark 2 One should caution that in the definition of the subset relation {Y|\Omega \subset X|\Omega}, it is important that {\Gamma(Y|E) \subset \Gamma(X|E)} for all events {E}, not just the global event {\Omega}; in particular, just because a stochastic set {X|\Omega} has no global sections, does not mean that it is contained in the stochastic empty set {\emptyset|\Omega}.

Now we discuss Boolean operations on stochastic subsets of a given stochastic set {X|\Omega}. Given two stochastic subsets {X_1|\Omega, X_2|\Omega} of {X|\Omega}, the stochastic intersection {(X_1 \cap X_2)|\Omega} is defined by setting {\Gamma((X_1 \cap X_2)|E)} to be the set of {x \in \Gamma(X|E)} that lie in both {\Gamma(X_1|E)} and {\Gamma(X_2|E)}:

\displaystyle  \Gamma((X_1 \cap X_2)|E) := \Gamma(X_1|E) \cap \Gamma(X_2|E).

This is easily verified to again be a stochastic subset of {X|\Omega}. More generally one may define stochastic countable intersections {(\bigcap_{n=1}^\infty X_n)|\Omega} for any sequence {X_n|\Omega} of stochastic subsets of {X|\Omega}. One could extend this definition to uncountable families if one wished, but I would advise against it, because some of the usual laws of Boolean algebra (e.g. the de Morgan laws) may break down in this setting.

Stochastic unions are a bit more subtle. The set {\Gamma((X_1 \cup X_2)|E)} should not be defined to simply be the union of {\Gamma(X_1|E)} and {\Gamma(X_2|E)}, as this would not respect the gluing axiom. Instead, we define {\Gamma((X_1 \cup X_2)|E)} to be the set of all {x \in \Gamma(X|E)} such that one can cover {E} by measurable subevents {E_1,E_2} such that {x|E_i \in \Gamma(X_i|E_i)} for {i=1,2}; then {(X_1 \cup X_2)|\Omega} may be verified to be a stochastic subset of {X|\Omega}. Thus for instance {\{0,1\}|\Omega} is the stochastic union of {\{0\}|\Omega} and {\{1\}|\Omega}. Similarly for countable unions {(\bigcup_{n=1}^\infty X_n)|\Omega} of stochastic subsets {X_n|\Omega} of {X|\Omega}, although uncountable unions are extremely problematic (they are disliked by both the measure theory and the countable gluing axiom) and will not be defined here. Finally, the stochastic difference set {\Gamma((X_1 \backslash X_2)|E)} is defined as the set of all {x \in \Gamma(X_1|E)} such that {x|F \not \in \Gamma(X_2|F)} for any subevent {F} of {E} of positive probability. One may verify that in the case when {\Omega} is discrete and at most countable, these Boolean operations correspond to the classical Boolean operations applied separately to each fibre {X_{i,\omega}} of the relevant sets {X_i}. We also leave it as an exercise to the reader to verify the usual laws of Boolean arithmetic, e.g. the de Morgan laws, provided that one works with at most countable unions and intersections.
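
The need for the covering definition of the stochastic union can already be seen over a two-point base space; the Python snippet below (my own illustration) exhibits two stochastic subsets with no global elements at all whose union nevertheless has a global element, obtained by gluing local elements over the two halves of the base space.

```python
# Two fibre-wise subsets of a discrete stochastic set over Omega = {0, 1}.
from itertools import product

X1 = {0: {0}, 1: set()}    # X1 has an empty fibre over omega = 1
X2 = {0: set(), 1: {0}}    # X2 has an empty fibre over omega = 0

def global_elements(fibres):
    keys = sorted(fibres)
    return [dict(zip(keys, c)) for c in product(*(fibres[w] for w in keys))]

assert global_elements(X1) == [] and global_elements(X2) == []   # no global sections
union = {w: X1[w] | X2[w] for w in X1}   # the fibre-wise (covering) union
print(global_elements(union))            # [{0: 0, 1: 0}]: a glued global element appears
```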

One can also consider a stochastic finite union {(\bigcup_{n=1}^N X_n)|\Omega} in which the number {N} of sets in the union is itself stochastic. More precisely, let {X|\Omega} be a stochastic set, let {N \in {\bf N}|\Omega} be a stochastic natural number, and let {n \mapsto X_n|\Omega} be a stochastic function from the stochastic set {\{ n \in {\bf N}: n \leq N\}|\Omega} (defined by setting {\Gamma(\{n \in {\bf N}: n\leq N\}|E) := \{ n \in {\bf N}|E: n \leq N|E\}}) to the stochastic power set {2^X|\Omega}. Here we are considering {0} to be a natural number, to allow for unions that are possibly empty, with {{\bf N}_+ := {\bf N} \backslash \{0\}} used for the positive natural numbers. We also write {(X_n)_{n=1}^N|\Omega} for the stochastic function {n \mapsto X_n|\Omega}. Then we can define the stochastic union {\bigcup_{n=1}^N X_n|\Omega} by setting {\Gamma(\bigcup_{n=1}^N X_n|E)} for an event {E} to be the set of local elements {x \in \Gamma(X|E)} with the property that there exists a covering of {E} by measurable subevents {E_{n_0}} for {n_0 \in {\bf N}_+}, such that one has {n_0 \leq N|E_{n_0}} and {x|E_{n_0} \in \Gamma(X_{n_0}|E_{n_0})}. One can verify that {\bigcup_{n=1}^N X_n|\Omega} is a stochastic set (with the obvious restriction maps). Again, in the model case when {\Omega} is discrete and at most countable, the fibre {(\bigcup_{n=1}^N X_n)_\omega} is what one would expect it to be, namely {\bigcup_{n=1}^{N(\omega)} (X_n)_\omega}.

The Cartesian product {(X \times Y)|\Omega} of two stochastic sets may be defined by setting {\Gamma((X \times Y)|E) := \Gamma(X|E) \times \Gamma(Y|E)} for all events {E}, with the obvious restriction maps; this is easily seen to be another stochastic set. This lets one define the concept of a {k}-ary operation {f: (X_1 \times \dots \times X_k)|\Omega \rightarrow Y|\Omega} from {k} stochastic sets {X_1,\dots,X_k} to another stochastic set {Y}, or a {k}-ary relation {R: (X_1 \times \dots \times X_k)|\Omega \rightarrow \{\hbox{true}, \hbox{false}\}|\Omega}. In particular, given {x_i \in X_i|\Omega} for {i=1,\dots,k}, the relation {R(x_1,\dots,x_k)} may be deterministically true, deterministically false, or have some other stochastic truth value.

Remark 3 In the degenerate case when {\Omega} is null, stochastic logic becomes a bit weird: all stochastic statements are deterministically true, as are their stochastic negations, since every event in {\Omega} (even the empty set) now holds with full probability. Among other pathologies, the empty set now has a global element over {\Omega} (this is analogous to the notorious convention {0^0=1}), and any two deterministic objects {x,y} become equal over {\Omega}: {x|\Omega=y|\Omega}.

The following simple observation is crucial to the subsequent discussion. If {(x_n)_{n \in {\bf N}_+}} is a sequence taking values in the global elements {\Gamma(X|\Omega)} of a stochastic space {X|\Omega}, then we may also define global elements {x_n \in \Gamma(X|\Omega)} for stochastic indices {n \in {\bf N}_+|\Omega} as well, by appealing to the countable gluing axiom to glue together {x_{n_0}} restricted to the set {\{ \omega \in \Omega: n(\omega) = n_0\}} for each deterministic natural number {n_0} to form {x_n}. With this definition, the map {n \mapsto x_n} is a stochastic function from {{\bf N}_+|\Omega} to {X|\Omega}; indeed, this creates a one-to-one correspondence between external sequences (maps {n \mapsto x_n} from {{\bf N}_+} to {\Gamma(X|\Omega)}) and stochastic sequences (stochastic functions {n \mapsto x_n} from {{\bf N}_+|\Omega} to {X|\Omega}). Similarly with {{\bf N}_+} replaced by any other at most countable set. This observation will be important in allowing many deterministic arguments involving sequences to be carried over to the stochastic setting.
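
In the discrete model this gluing construction is just case-by-case assembly, as in the following short Python sketch (my own illustration with made-up data):

```python
# Gluing x_{n0} over the events {omega : n(omega) = n0} to define x_n for a stochastic index n.
Omega = [0, 1, 2, 3]
x = {n0: {w: 10 * n0 + w for w in Omega} for n0 in (1, 2, 3)}   # an external sequence of global elements
n = {0: 1, 1: 3, 2: 2, 3: 1}                                    # a stochastic index n: Omega -> N_+

x_n = {w: x[n[w]][w] for w in Omega}   # on the event {n = n0}, use x_{n0}
print(x_n)   # {0: 10, 1: 31, 2: 22, 3: 13}
```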

We now specialise from the extremely broad discipline of set theory to the more focused discipline of real analysis. There are two fundamental axioms that underlie real analysis (and in particular distinguish it from real algebra). The first is the Archimedean property, which we phrase in the “no infinitesimal” formulation as follows:

Proposition 2 (Archimedean property) Let {x \in {\bf R}} be such that {x \leq 1/n} for all positive natural numbers {n}. Then {x \leq 0}.

The other is the least upper bound axiom:

Proposition 3 (Least upper bound axiom) Let {S} be a non-empty subset of {{\bf R}} which has an upper bound {M \in {\bf R}}, thus {x \leq M} for all {x \in S}. Then there exists a unique real number {\sup S \in {\bf R}} with the following properties:

  • {x \leq \sup S} for all {x \in S}.
  • For any real {L < \sup S}, there exists {x \in S} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

The Archimedean property extends easily to the stochastic setting:

Proposition 4 (Stochastic Archimedean property) Let {x \in \Gamma({\bf R}|\Omega)} be such that {x \leq 1/n} for all deterministic natural numbers {n}. Then {x \leq 0}.

Remark 4 Here, incidentally, is one place in which this stochastic formalism deviates from the nonstandard analysis formalism, as the latter certainly permits the existence of infinitesimal elements. On the other hand, we caution that stochastic real numbers are permitted to be unbounded, so that the formulation of the Archimedean property which asserts that every real number is bounded above by some natural number is not valid in the stochastic setting.

The proof is easy and is left to the reader. The least upper bound axiom also extends nicely to the stochastic setting, but the proof requires more work (in particular, our argument uses the monotone convergence theorem):

Theorem 5 (Stochastic least upper bound axiom) Let {S|\Omega} be a stochastic subset of {{\bf R}|\Omega} which has a global upper bound {M \in {\bf R}|\Omega}, thus {x \leq M} for all {x \in \Gamma(S|\Omega)}, and is globally non-empty in the sense that there is at least one global element {x \in \Gamma(S|\Omega)}. Then there exists a unique stochastic real number {\sup S \in \Gamma({\bf R}|\Omega)} with the following properties:

  • {x \leq \sup S} for all {x \in \Gamma(S|\Omega)}.
  • For any stochastic real {L < \sup S}, there exists {x \in \Gamma(S|\Omega)} such that {L < x \leq \sup S}.
  • {\sup S \leq M}.

Furthermore, {\sup S} does not depend on the choice of {M}.

For future reference, we note that the same result holds with {{\bf R}} replaced by {{\bf N} \cup \{+\infty\}} throughout, since the latter may be embedded in the former, for instance by mapping {n} to {1 - \frac{1}{n+1}} and {+\infty} to {1}. In applications, the above theorem serves as a reasonable substitute for the countable axiom of choice, which does not appear to hold in unrestricted generality relative to a measure space; in particular, it can be used to generate various extremising sequences for stochastic functionals on various stochastic function spaces.

Proof: Uniqueness is clear (using the Archimedean property), as well as the independence of {\sup S} from {M}, so we turn to existence. By using an order-preserving map from {{\bf R}} to {(-1,1)} (e.g. {x \mapsto \frac{2}{\pi} \hbox{arctan}(x)}) we may assume that {S|\Omega} is a subset of {(-1,1)|\Omega}, and that {M < 1}.

We observe that {\Gamma(S|\Omega)} is a lattice: if {x, y \in \Gamma(S|\Omega)}, then {\max(x,y)} and {\min(x,y)} also lie in {\Gamma(S|\Omega)}. Indeed, {\max(x,y)} may be formed by appealing to the countable gluing axiom to glue {y} (restricted to the set {\{ \omega \in \Omega: x(\omega) < y(\omega) \}}) with {x} (restricted to the set {\{ \omega \in \Omega: x(\omega) \geq y(\omega) \}}), and similarly for {\min(x,y)}. (Here we use the fact that relations such as {<} are Borel measurable on {{\bf R}}.)

Let {A \in {\bf R}} denote the deterministic quantity

\displaystyle  A := \sup \{ \int_\Omega x(\omega)\ d\mu(\omega): x \in \Gamma(S|\Omega) \}

then (by Proposition 3!) {A} is well-defined; here we use the hypothesis that {\mu(\Omega)} is finite. Thus we may find a sequence {(x_n)_{n \in {\bf N}}} of elements {x_n} of {\Gamma(S|\Omega)} such that

\displaystyle  \int_\Omega x_n(\omega)\ d\mu(\omega) \rightarrow A \hbox{ as } n \rightarrow \infty. \ \ \ \ \ (1)

Using the lattice property, we may assume that the {x_n} are non-decreasing: {x_n \leq x_m} whenever {n \leq m}. If we then define {\sup S(\omega) := \sup_n x_n(\omega)} (after choosing measurable representatives of each equivalence class {x_n}), then {\sup S} is a stochastic real with {\sup S \leq M}.

If {x \in \Gamma(S|\Omega)}, then {\max(x,x_n) \in \Gamma(S|\Omega)}, and so

\displaystyle  \int_\Omega \max(x,x_n)\ d\mu(\omega) \leq A.

From this and (1) we conclude that

\displaystyle  \int_\Omega \max(x-x_n,0)\ d\mu(\omega) \rightarrow 0 \hbox{ as } n \rightarrow \infty.

From monotone convergence, we conclude that

\displaystyle  \int_\Omega \max(x-\sup S,0)\ d\mu(\omega) = 0

and so {x \leq \sup S}, as required.

Now let {L < \sup S} be a stochastic real. After choosing measurable representatives of each relevant equivalence class, we see that for almost every {\omega \in \Omega}, we can find a natural number {n(\omega)} with {x_{n(\omega)}(\omega) > L(\omega)}. If we choose {n(\omega)} to be the first such positive natural number when it exists, and (say) {1} otherwise, then {n} is a stochastic positive natural number and {L < x_n}. The claim follows. \Box
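
On a finite discrete base space the strategy of this proof can be carried out by brute force; the following Python sketch (my own toy illustration) checks that the element of {\Gamma(S|\Omega)} maximising the integral is precisely the fibre-wise supremum.

```python
# Toy check of the proof of Theorem 5 on a three-point base space.
from itertools import product

Omega = [0, 1, 2]
mu = {0: 1.0, 1: 2.0, 2: 0.5}
S_fibres = {0: [0.1, 0.6], 1: [0.2, 0.7], 2: [0.3, 0.9]}   # the fibres S_omega

# in the discrete model, Gamma(S|Omega) is the product of the fibres
sections = [dict(zip(Omega, c)) for c in product(*(S_fibres[w] for w in Omega))]
integral = lambda x: sum(x[w] * mu[w] for w in Omega)

best = max(sections, key=integral)                      # the maximiser from the proof
assert best == {w: max(S_fibres[w]) for w in Omega}     # ... equals the fibre-wise supremum
print(best)   # {0: 0.6, 1: 0.7, 2: 0.9}
```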

Remark 5 One can abstract away the role of the measure {\mu} here, leaving only the ideal of null sets. The property that the measure is finite is then replaced by the more general property that given any non-empty family of measurable sets, there is an at most countable union of sets in that family that is an upper bound modulo null sets for all elements in that family.

Using Proposition 4 and Theorem 5, one can then revisit many of the other foundational results of deterministic real analysis, and develop stochastic analogues; we give some examples of this below the fold (focusing on the Heine-Borel theorem and a case of the spectral theorem). As an application of this formalism, we revisit some of the Furstenberg-Zimmer structural theory of measure-preserving systems, particularly that of relatively compact and relatively weakly mixing systems, and interpret them in this framework, basically as stochastic versions of compact and weakly mixing systems (though with the caveat that the shift map is allowed to act non-trivially on the underlying probability space). As this formalism is “point-free”, in that it avoids explicit use of fibres and disintegrations, it will be well suited for generalising this structure theory to settings in which the underlying probability spaces are not standard Borel, and the underlying groups are uncountable; I hope to discuss such generalisations in future blog posts.

Remark 6 Roughly speaking, stochastic real analysis can be viewed as a restricted subset of classical real analysis in which all operations have to be “measurable” with respect to the base space. In particular, indiscriminate application of the axiom of choice is not permitted, and one should largely restrict oneself to performing countable unions and intersections rather than arbitrary unions or intersections. Presumably one can formalise this intuition with a suitable “countable transfer principle”, but I was not able to formulate a clean and general principle of this sort, instead verifying various assertions about stochastic objects by hand rather than by direct transfer from the deterministic setting. However, it would be desirable to have such a principle, since otherwise one is faced with the tedious task of redoing all the foundations of real analysis (or whatever other base theory of mathematics one is going to be working in) in the stochastic setting by carefully repeating all the arguments.

More generally, topos theory is a good formalism for capturing precisely the informal idea of performing mathematics with certain operations, such as the axiom of choice, the law of the excluded middle, or arbitrary unions and intersections, being somehow “prohibited” or otherwise “restricted”.

Read the rest of this entry »

As laid out in the foundational work of Kolmogorov, a classical probability space (or probability space for short) is a triplet {(X, {\mathcal X}, \mu)}, where {X} is a set, {{\mathcal X}} is a {\sigma}-algebra of subsets of {X}, and {\mu: {\mathcal X} \rightarrow [0,1]} is a countably additive probability measure on {{\mathcal X}}. Given such a space, one can form a number of interesting function spaces, including

  • the (real) Hilbert space {L^2(X, {\mathcal X}, \mu)} of square-integrable functions {f: X \rightarrow {\bf R}}, modulo {\mu}-almost everywhere equivalence, and with the positive definite inner product {\langle f, g\rangle_{L^2(X, {\mathcal X}, \mu)} := \int_X f g\ d\mu}; and
  • the unital commutative Banach algebra {L^\infty(X, {\mathcal X}, \mu)} of essentially bounded functions {f: X \rightarrow {\bf R}}, modulo {\mu}-almost everywhere equivalence, with {\|f\|_{L^\infty(X, {\mathcal X}, \mu)}} defined as the essential supremum of {|f|}.

There is also a trace {\tau = \tau_\mu: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf R}} on {L^\infty} defined by integration: {\tau(f) := \int_X f\ d\mu}.
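
On a finite probability space these function spaces and the trace reduce to weighted sums, and it may help to see that special case spelled out. The following Python lines are a toy illustration only, with names of my own choosing.

    # A finite probability space: X = {0, 1, 2} with probabilities mu.  Functions
    # f: X -> R are lists, and the inner product and trace are the weighted sums
    # that the corresponding integrals reduce to in this case.

    X = [0, 1, 2]
    mu = [0.5, 0.3, 0.2]                # a probability measure: nonnegative weights summing to 1

    def inner(f, g):
        """<f, g>_{L^2} = int_X f g dmu, here a finite weighted sum."""
        return sum(mu[x] * f[x] * g[x] for x in X)

    def trace(f):
        """tau_mu(f) = int_X f dmu."""
        return sum(mu[x] * f[x] for x in X)

    def sup_norm(f):
        """||f||_{L^infty}: on a finite space, the maximum of |f| over points of positive measure."""
        return max(abs(f[x]) for x in X if mu[x] > 0)

    f = [1.0, -2.0, 3.0]
    g = [0.0, 1.0, 1.0]
    print(inner(f, g), trace(f), sup_norm(f))   # -> 0.0 0.5 3.0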

One can form the category {\mathbf{Prb}} of classical probability spaces, by defining a morphism {\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)} between probability spaces to be a function {\phi: X \rightarrow Y} which is measurable (thus {\phi^{-1}(E) \in {\mathcal X}} for all {E \in {\mathcal Y}}) and measure-preserving (thus {\mu(\phi^{-1}(E)) = \nu(E)} for all {E \in {\mathcal Y}}).

Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an algebraic probability space. It is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).

Definition 1 An algebraic probability space is a pair {({\mathcal A}, \tau)} where

  • {{\mathcal A}} is a unital commutative real algebra;
  • {\tau: {\mathcal A} \rightarrow {\bf R}} is a linear functional such that {\tau(1)=1} and {\tau( f^2 ) \geq 0} for all {f \in {\mathcal A}};
  • Every element {f} of {{\mathcal A}} is bounded in the sense that {\sup_{k \geq 1} \tau( f^{2k} )^{1/2k} < \infty}. (Technically, this isn’t an algebraic property, but I need it for technical reasons.)

A morphism {\phi: ({\mathcal A}_1, \tau_1) \rightarrow ({\mathcal A}_2, \tau_2)} is a homomorphism {\phi^*: {\mathcal A}_2 \rightarrow {\mathcal A}_1} which is trace-preserving, in the sense that {\tau_1(\phi^*(f)) = \tau_2(f)} for all {f \in {\mathcal A}_2}.
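
As a sanity check on Definition 1, one can spot-check the axioms in the simplest example: the algebra {{\bf R}^n} with coordinatewise operations and a weighted-average trace, which is the algebraic counterpart of a finite probability space. The Python sketch below does this; it is illustrative only, and all names in it are mine.

    # A toy algebraic probability space: A = R^n with coordinatewise multiplication
    # and tau(f) = sum_i p_i f_i for a probability vector p.  We spot-check that tau
    # is additive, that tau(1) = 1 and tau(f^2) >= 0, and that tau(f^{2k})^{1/(2k)}
    # stays bounded in k (it increases towards max_i |f_i| here).

    p = [0.2, 0.5, 0.3]                        # the probability weights defining the trace
    one = [1.0, 1.0, 1.0]                      # the unit of the algebra

    def mul(f, g): return [a * b for a, b in zip(f, g)]
    def add(f, g): return [a + b for a, b in zip(f, g)]
    def tau(f):    return sum(w * a for w, a in zip(p, f))

    def power(f, k):
        out = one
        for _ in range(k):
            out = mul(out, f)
        return out

    f = [2.0, -1.0, 0.5]
    g = [0.0, 3.0, 1.0]

    assert abs(tau(one) - 1.0) < 1e-12                       # tau(1) = 1
    assert abs(tau(add(f, g)) - (tau(f) + tau(g))) < 1e-12   # additivity of the trace
    assert tau(mul(f, f)) >= 0                               # positivity: tau(f^2) >= 0
    print([tau(power(f, 2 * k)) ** (1.0 / (2 * k)) for k in (1, 2, 4, 8, 16)])  # bounded by 2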

For want of a better name, I’ll denote the category of algebraic probability spaces as {\mathbf{AlgPrb}}. One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as {({\mathcal A}, \tau)^{op}} rather than {({\mathcal A},\tau)}; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be {\hbox{Spec}({\mathcal A},\tau)} rather than {({\mathcal A},\tau)}. However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair {({\mathcal A},\tau)}.

By the previous discussion, we have a covariant functor {F: \textbf{Prb} \rightarrow \textbf{AlgPrb}} that takes a classical probability space {(X, {\mathcal X}, \mu)} to its algebraic counterpart {(L^\infty(X, {\mathcal X},\mu), \tau_\mu)}, with a morphism {\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)} of classical probability spaces mapping to a morphism {F(\phi): (L^\infty(X, {\mathcal X},\mu), \tau_\mu) \rightarrow (L^\infty(Y, {\mathcal Y},\nu), \tau_\nu)} of the corresponding algebraic probability spaces by the formula

\displaystyle  F(\phi)^* f := f \circ \phi

for {f \in L^\infty(Y, {\mathcal Y}, \nu)}. One easily verifies that this is a functor.
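
Here is a small Python sketch of the formula {F(\phi)^* f := f \circ \phi} for finite spaces (a hypothetical toy example of my own): it checks that a map {\phi} between two finite probability spaces is measure-preserving, and that pulling back along {\phi} preserves the trace, which is just the change-of-variables identity {\int_X f \circ \phi\ d\mu = \int_Y f\ d\nu}.

    # Two finite probability spaces and a measure-preserving map phi between them.
    # Pulling back by F(phi)^* f := f . phi preserves the trace.

    X, mu = [0, 1, 2, 3], [0.1, 0.4, 0.3, 0.2]
    Y, nu = [0, 1],       [0.5, 0.5]
    phi = {0: 0, 1: 0, 2: 1, 3: 1}       # phi: X -> Y

    def trace(weights, f):
        return sum(w * f[i] for i, w in enumerate(weights))

    # measure-preserving: mu(phi^{-1}(E)) = nu(E); on a finite space it suffices
    # to check the singletons E = {y}, by additivity.
    for y in Y:
        assert abs(sum(mu[x] for x in X if phi[x] == y) - nu[y]) < 1e-12

    f = [2.0, -1.0]                      # a function on Y
    pullback = [f[phi[x]] for x in X]    # F(phi)^* f = f . phi, a function on X
    assert abs(trace(mu, pullback) - trace(nu, f)) < 1e-12   # tau_mu(f . phi) = tau_nu(f)
    print(trace(nu, f))                  # -> 0.5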

In this post I would like to describe a functor {G: \textbf{AlgPrb} \rightarrow \textbf{Prb}} which partially inverts {F} (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space {({\mathcal A}, \tau)} and producing a classical probability space {(X, {\mathcal X}, \mu)}. This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that {F} and {G} are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.

Let us describe how to construct the functor {G}, with details postponed to below the fold.

  1. Starting with an algebraic probability space {({\mathcal A}, \tau)}, form an inner product on {{\mathcal A}} by the formula {\langle f, g \rangle := \tau(fg)}, and also form the spectral radius {\rho(f) :=\lim_{k \rightarrow \infty} \tau(f^{2^k})^{1/2^k}}.
  2. The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space {L^2 = L^2({\mathcal A},\tau)}, to which the trace {\tau} may be extended.
  3. Somewhat less obviously, the spectral radius is well-defined and gives a norm on {{\mathcal A}}. Taking {L^2} limits of sequences in {{\mathcal A}} of bounded spectral radius gives us a subspace {L^\infty = L^\infty({\mathcal A},\tau)} of {L^2} that has the structure of a real commutative Banach algebra.
  4. The idempotents {1_E} of the Banach algebra {L^\infty} may be indexed by elements {E} of an abstract {\sigma}-algebra {{\mathcal B}}.
  5. The Boolean algebra homomorphisms {\delta_x: {\mathcal B} \rightarrow \{0,1\}} (or equivalently, the real algebra homomorphisms {\iota_x: L^\infty \rightarrow {\bf R}}) may be indexed by elements {x} of a space {X}.
  6. Let {{\mathcal X}} denote the {\sigma}-algebra on {X} generated by the basic sets {\overline{E} := \{ x \in X: \delta_x(E) = 1 \}} for every {E \in {\mathcal B}}.
  7. Let {{\mathcal N}} be the {\sigma}-ideal of {{\mathcal X}} generated by the sets {\bigcap_n \overline{E_n}}, where {E_n \in {\mathcal B}} is a sequence with {\bigcap_n E_n = \emptyset}.
  8. One verifies that {{\mathcal B}} is isomorphic to {{\mathcal X}/{\mathcal N}}. Using this isomorphism, the trace {\tau} on {L^\infty} can be used to construct a countably additive measure {\mu} on {{\mathcal X}}. The classical probability space {(X, {\mathcal X}, \mu)} is then {G( {\mathcal A}, \tau )}, and the abstract spaces {L^2, L^\infty} may now be identified with their concrete counterparts {L^2(X, {\mathcal X}, \mu)}, {L^\infty(X, {\mathcal X}, \mu)}.
  9. Every algebraic probability space morphism {\phi: ({\mathcal A}_1,\tau_1) \rightarrow ({\mathcal A}_2,\tau_2)} generates a classical probability morphism {G(\phi): (X_1, {\mathcal X}_1, \mu_1) \rightarrow (X_2, {\mathcal X}_2, \mu_2)} via the formula

    \displaystyle  \delta_{G(\phi)(x_1)}( E_2 ) = \delta_{x_1}( \phi^*(E_2) )

    using a pullback operation {\phi^*} on the abstract {\sigma}-algebras {{\mathcal B}_1, {\mathcal B}_2} that can be defined by density.
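
In the toy case of the finite-dimensional algebra {{\mathcal A} = {\bf R}^n} with trace {\tau(f) = \sum_i p_i f_i}, every step of this construction collapses to something elementary: the spectral radius becomes the supremum of {|f|} on the support of {p}, the idempotents are indicator vectors, the algebra homomorphisms to {{\bf R}} are evaluations at points of the support, and the recovered measure simply reads the weights off the trace. The following Python sketch (my own shortcut, and emphatically not the general construction, which requires the GNS and Loomis-Sikorski machinery mentioned above) records this collapse.

    # Toy model of the functor G: start from (A, tau) with A = R^n (coordinatewise
    # product) and tau(f) = sum_i p_i f_i, and recover a classical probability space.

    p = [0.0, 0.25, 0.75]                # trace weights; the index-0 coordinate is "null"

    def tau(f):
        return sum(w * a for w, a in zip(p, f))

    def spectral_radius(f, k=8):
        # rho(f) = lim_k tau(f^{2^k})^{1/2^k}; a moderate k already approximates the
        # limit, which here is the maximum of |f_i| over the support of p.
        power = [a ** (2 ** k) for a in f]
        return tau(power) ** (1.0 / 2 ** k)

    # Step 5: the real algebra homomorphisms are the evaluations at points of the support.
    X = [i for i, w in enumerate(p) if w > 0]

    # Step 4: the idempotents of L^infty are the indicator vectors of subsets E of X.
    def indicator(E):
        return [1.0 if i in E else 0.0 for i in range(len(p))]

    # Step 8: the recovered measure of E is the trace of its indicator.
    def mu(E):
        return tau(indicator(E))

    print(X)                                    # -> [1, 2]
    print(mu({1}), mu({2}), mu(set(X)))         # -> 0.25 0.75 1.0
    print(spectral_radius([3.0, -0.5, 0.25]))   # ~ 0.5: the weight-zero coordinate is invisible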

Remark 1 The classical probability space {X} constructed by the functor {G} has some additional structure; namely {X} is a {\sigma}-Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), {{\mathcal X}} is the Baire {\sigma}-algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.

The partial inversion relationship between the functors {F: \textbf{Prb} \rightarrow \textbf{AlgPrb}} and {G: \textbf{AlgPrb} \rightarrow \textbf{Prb}} is given by the following assertion:

  1. There is a natural transformation from {F \circ G: \textbf{AlgPrb} \rightarrow \textbf{AlgPrb}} to the identity functor {I: \textbf{AlgPrb} \rightarrow \textbf{AlgPrb}}.

More informally: if one starts with an algebraic probability space {({\mathcal A},\tau)} and converts it back into a classical probability space {(X, {\mathcal X}, \mu)}, then there is a trace-preserving algebra homomorphism of {{\mathcal A}} to {L^\infty( X, {\mathcal X}, \mu )}, which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require that {F \circ G} and {G \circ F} are both natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.

Remark 2 The opposite composition {G \circ F: \textbf{Prb} \rightarrow \textbf{Prb}} is a little odd: it takes an arbitrary probability space {(X, {\mathcal X}, \mu)} and returns a more complicated probability space {(X', {\mathcal X}', \mu')}, with {X'} being the space of homomorphisms {\iota_x: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf R}}. While there is “morally” an embedding of {X} into {X'} using the evaluation map, this map does not exist in general because points in {X} may well have zero measure. However, if one takes a “pointless” approach and focuses just on the measure algebras {({\mathcal X}, \mu)}, {({\mathcal X}', \mu')}, then these algebras become naturally isomorphic after quotienting out by null sets.

Remark 3 An algebraic probability space captures a bit more structure than a classical probability space, because {{\mathcal A}} may be identified with a proper subset of {L^\infty} that describes the “regular” functions (or random variables) of the space. For instance, starting with the unit circle {{\bf R}/{\bf Z}} (with the usual Haar measure and the usual trace {\tau(f) = \int_{{\bf R}/{\bf Z}} f}), any unital subalgebra {{\mathcal A}} of {L^\infty({\bf R}/{\bf Z})} that is dense in {L^2({\bf R}/{\bf Z})} will generate the same classical probability space {G( {\mathcal A}, \tau )} on applying the functor {G}, namely one will get the space {({\bf R}/{\bf Z})'} of homomorphisms from {L^\infty({\bf R}/{\bf Z})} to {{\bf R}} (with the measure induced from {\tau}). Thus for instance {{\mathcal A}} could be the continuous functions {C( {\bf R}/{\bf Z} )}, the Wiener algebra {A({\bf R}/{\bf Z})} or the full space {L^\infty({\bf R}/{\bf Z})}, but the classical space {G( {\mathcal A}, \tau )} will be unable to distinguish these spaces from each other. In particular, the functor {F \circ G} loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of “extra structure” is traditionally encoded in topological terms, by assuming that the underlying probability space {X} has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing {{\mathcal A}} to be something other than an algebra {C(X)} of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding “Fourier structure”).

A small example of how one could use the functors {F, G} is as follows. Suppose one has a classical probability space {(X, {\mathcal X}, \mu)} with a measure-preserving action of an uncountable group {\Gamma}, which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any set {E} and any {g, h \in \Gamma}, {T^{gh} E} and {T^g T^h E} might not be exactly equal, but only equal up to a null set. For similar reasons, an element {E} of the invariant factor {{\mathcal X}^\Gamma} might not be exactly invariant with respect to {\Gamma}, but instead one only has {T^g E} and {E} equal up to null sets for each {g \in \Gamma}. One might like to “clean up” the action of {\Gamma} to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if {\Gamma} is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor {F}, each shift {T^g: X \rightarrow X} defines a morphism {T^g: L^\infty(X, {\mathcal X}, \mu) \rightarrow L^\infty(X, {\mathcal X}, \mu)} on the associated algebraic probability space (i.e. the Koopman operator), and then applying {G}, we obtain a shift {T^g: X' \rightarrow X'} on a new classical probability space {(X', {\mathcal X}', \mu')} which now gives a genuine measure-preserving action of {\Gamma}, and which is equivalent to the original action from a measure algebra standpoint. The invariant factor {({\mathcal X}')^\Gamma} now consists of those sets in {{\mathcal X}'} which are genuinely {\Gamma}-invariant, not just up to null sets. (Basically, the classical probability space {(X', {\mathcal X}', \mu')} contains a Boolean algebra {\overline{\mathcal B}} with the property that every measurable set {A \in {\mathcal X}'} is equivalent up to null sets to precisely one set in {\overline{\mathcal B}}, allowing for a canonical “retraction” onto {\overline{\mathcal B}} that eliminates all null set issues.)

More indirectly, the functors {F, G} suggest that one should be able to develop a “pointless” form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.

Read the rest of this entry »

Given a function {f: X \rightarrow Y} between two sets {X, Y}, we can form the graph

\displaystyle  \Sigma := \{ (x,f(x)): x\in X \},

which is a subset of the Cartesian product {X \times Y}.

There are a number of “closed graph theorems” in mathematics which relate the regularity properties of the function {f} with the closure properties of the graph {\Sigma}, assuming some “completeness” properties of the domain {X} and range {Y}. The most famous of these is the closed graph theorem from functional analysis, which I phrase as follows:

Theorem 1 (Closed graph theorem (functional analysis)) Let {X, Y} be complete normed vector spaces over the reals (i.e. Banach spaces). Then a function {f: X \rightarrow Y} is a continuous linear transformation if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is both linearly closed (i.e. it is a linear subspace of {X \times Y}) and topologically closed (i.e. closed in the product topology of {X \times Y}).

I like to think of this theorem as linking together qualitative and quantitative notions of regularity preservation properties of an operator {f}; see this blog post for further discussion.

The theorem is equivalent to the assertion that any continuous linear bijection {f: X \rightarrow Y} from one Banach space to another is necessarily an isomorphism in the sense that the inverse map is also continuous and linear. Indeed, to see that this claim implies the closed graph theorem, one applies it to the projection from {\Sigma} to {X}, which is a continuous linear bijection; conversely, to deduce this claim from the closed graph theorem, observe that the graph of the inverse {f^{-1}} is the reflection of the graph of {f}. As such, the closed graph theorem is a corollary of the open mapping theorem, which asserts that any continuous linear surjection from one Banach space to another is open. (Conversely, one can deduce the open mapping theorem from the closed graph theorem by quotienting out the kernel of the continuous surjection to get a bijection.)

It turns out that there is a closed graph theorem (or equivalent reformulations of that theorem, such as an assertion that bijective morphisms between sufficiently “complete” objects are necessarily isomorphisms, or as an open mapping theorem) in many other categories in mathematics as well. Here are some easy ones:

Theorem 2 (Closed graph theorem (linear algebra)) Let {X, Y} be vector spaces over a field {k}. Then a function {f: X \rightarrow Y} is a linear transformation if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is linearly closed.

Theorem 3 (Closed graph theorem (group theory)) Let {X, Y} be groups. Then a function {f: X \rightarrow Y} is a group homomorphism if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is closed under the group operations (i.e. it is a subgroup of {X \times Y}).

Theorem 4 (Closed graph theorem (order theory)) Let {X, Y} be totally ordered sets. Then a function {f: X \rightarrow Y} is monotone increasing if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is totally ordered (using the product order on {X \times Y}).
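
Theorems 2-4 can be confirmed by brute force on small finite examples. The following Python sketch (a toy check of mine, not part of the original discussion) exhausts all maps from {{\bf Z}/2{\bf Z}} to {{\bf Z}/4{\bf Z}} and verifies that being a homomorphism coincides with the graph being a subgroup of the product, as asserted in Theorem 3.

    # Brute-force check of Theorem 3 for X = Z/2Z and Y = Z/4Z: a map f: X -> Y
    # is a homomorphism exactly when its graph is a subgroup of X x Y.
    from itertools import product

    n, m = 2, 4                          # X = Z/nZ, Y = Z/mZ

    def is_homomorphism(f):
        return all(f[(a + b) % n] == (f[a] + f[b]) % m for a in range(n) for b in range(n))

    def is_subgroup(S):
        # contains the identity and is closed under the operation (enough for finite groups)
        if (0, 0) not in S:
            return False
        return all(((a1 + a2) % n, (b1 + b2) % m) in S
                   for (a1, b1) in S for (a2, b2) in S)

    for values in product(range(m), repeat=n):   # all m^n maps f: X -> Y
        f = dict(enumerate(values))
        graph = {(x, f[x]) for x in range(n)}
        assert is_homomorphism(f) == is_subgroup(graph)
    print("Theorem 3 verified for all maps Z/2Z -> Z/4Z")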

Remark 1 Similar results to the above three theorems (with similarly easy proofs) hold for other algebraic structures, such as rings (using the usual product of rings), modules, algebras, Lie algebras, groupoids, or even categories (a map between categories is a functor iff its graph is again a category). (ADDED IN VIEW OF COMMENTS: further examples include affine spaces and {G}-sets (sets with an action of a given group {G}).) There are also various approximate versions of these theorems that are useful in arithmetic combinatorics, which relate the property of a map {f} being an “approximate homomorphism” in some sense with its graph being an “approximate group” in some sense. This is particularly useful for this subfield of mathematics because there are currently more theorems about approximate groups than about approximate homomorphisms, so that one can profitably use closed graph theorems to transfer results about the former to results about the latter.

A slightly more sophisticated result in the same vein:

Theorem 5 (Closed graph theorem (point set topology)) Let {X, Y} be compact Hausdorff spaces. Then a function {f: X \rightarrow Y} is continuous if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is topologically closed.

Indeed, the “only if” direction is easy, while for the “if” direction, note that if {\Sigma} is a closed subset of {X \times Y}, then it is compact Hausdorff, and the projection map from {\Sigma} to {X} is then a bijective continuous map between compact Hausdorff spaces, which is then closed, thus open, and hence a homeomorphism, giving the claim.

Note that the compactness hypothesis is necessary: for instance, the function {f: {\bf R} \rightarrow {\bf R}} defined by {f(x) := 1/x} for {x \neq 0} and {f(0) := 0} has a closed graph, but is discontinuous.

A similar result (but relying on a much deeper theorem) is available in algebraic geometry, as I learned after asking this MathOverflow question:

Theorem 6 (Closed graph theorem (algebraic geometry)) Let {X, Y} be normal projective varieties over an algebraically closed field {k} of characteristic zero. Then a function {f: X \rightarrow Y} is a regular map if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is Zariski-closed.

Proof: (Sketch) For the only if direction, note that the map {x \mapsto (x,f(x))} is a regular map from the projective variety {X} to the projective variety {X \times Y} and is thus a projective morphism, hence is proper. In particular, the image {\Sigma} of {X} under this map is Zariski-closed.

Conversely, if {\Sigma} is Zariski-closed, then it is also a projective variety, and the projection {(x,y) \mapsto x} is a projective morphism from {\Sigma} to {X}, which is clearly quasi-finite; by the characteristic zero hypothesis, it is also separable. Applying (Grothendieck’s form of) Zariski’s main theorem, this projection is the composition of an open immersion and a finite map. As projective varieties are complete, the open immersion is an isomorphism, and so the projection from {\Sigma} to {X} is finite. As this finite map is injective and separable, its degree must be one, and hence {k(\Sigma)} and {k(X)} are isomorphic, hence (by normality of {X}) {k[\Sigma]} is contained in (the image of) {k[X]}, which makes the map from {X} to {\Sigma} regular, which makes {f} regular. \Box

The counterexample of the map {f: k \rightarrow k} given by {f(x) := 1/x} for {x \neq 0} and {f(0) := 0} demonstrates why the projective hypothesis is necessary. The necessity of the normality condition (or more precisely, a weak normality condition) is demonstrated by (the projective version of) the map {(t^2,t^3) \mapsto t} from the cuspidal curve {\{ (t^2,t^3): t \in k \}} to {k}. (If one restricts attention to smooth varieties, though, normality becomes automatic.) The necessity of characteristic zero is demonstrated by (the projective version of) the inverse of the Frobenius map {x \mapsto x^p} on a field {k} of characteristic {p}.

There are also a number of closed graph theorems for topological groups, of which the following is typical (see Exercise 3 of these previous blog notes):

Theorem 7 (Closed graph theorem (topological group theory)) Let {X, Y} be {\sigma}-compact, locally compact Hausdorff groups. Then a function {f: X \rightarrow Y} is a continuous homomorphism if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is both group-theoretically closed and topologically closed.

The hypotheses of being {\sigma}-compact, locally compact, and Hausdorff can be relaxed somewhat, but I doubt that they can be eliminated entirely (though I do not have a ready counterexample for this).

In several complex variables, it is a classical theorem (see e.g. Lemma 4 of this blog post) that a holomorphic function from a domain in {{\bf C}^n} to {{\bf C}^n} is locally injective if and only if it is a local diffeomorphism (i.e. its derivative is everywhere non-singular). This leads to a closed graph theorem for complex manifolds:

Theorem 8 (Closed graph theorem (complex manifolds)) Let {X, Y} be complex manifolds. Then a function {f: X \rightarrow Y} is holomorphic if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is a complex manifold (using the complex structure inherited from {X \times Y}) of the same dimension as {X}.

Indeed, one applies the previous observation to the projection from {\Sigma} to {X}. The dimension requirement is needed, as can be seen from the example of the map {f: {\bf C} \rightarrow {\bf C}} defined by {f(z) =1/z} for {z \neq 0} and {f(0)=0}.

(ADDED LATER:) There is a real analogue to the above theorem:

Theorem 9 (Closed graph theorem (real manifolds)) Let {X, Y} be real manifolds. Then a function {f: X \rightarrow Y} is continuous if and only if the graph {\Sigma := \{ (x,f(x)): x \in X \}} is a real manifold of the same dimension as {X}.

This theorem can be proven by applying invariance of domain (discussed in this previous post) to the projection of {\Sigma} to {X}, to show that it is open if {\Sigma} has the same dimension as {X}.

Note though that the analogous claim for smooth real manifolds fails: the function {f: {\bf R} \rightarrow {\bf R}} defined by {f(x) := x^{1/3}} has a smooth graph, but is not itself smooth.
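
A quick numerical illustration of this last point (my own sketch, in Python): the graph of {x \mapsto x^{1/3}} is the image of the smooth parametrisation {t \mapsto (t^3,t)}, whose derivative {(3t^2,1)} never vanishes, yet the difference quotients of {f} itself blow up at the origin.

    # f(x) = x^{1/3} has a smooth graph (parametrised by t -> (t^3, t)) but is not
    # differentiable at 0: its difference quotients at the origin grow like h^{-2/3}.

    def f(x):
        return x ** (1.0 / 3.0) if x >= 0 else -((-x) ** (1.0 / 3.0))

    for h in (1e-2, 1e-4, 1e-6, 1e-8):
        print(h, (f(h) - f(0)) / h)      # roughly 21.5, 464.2, 10000.0, 215443.5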

(ADDED YET LATER:) Here is an easy closed graph theorem in the symplectic category:

Theorem 10 (Closed graph theorem (symplectic geometry)) Let {X = (X,\omega_X)} and {Y = (Y,\omega_Y)} be smooth symplectic manifolds of the same dimension. Then a smooth map {f: X \rightarrow Y} is a symplectic morphism (i.e. {f^* \omega_Y = \omega_X}) if and only if the graph {\Sigma := \{(x,f(x)): x \in X \}} is a Lagrangian submanifold of {X \times Y} with the symplectic form {\omega_X \oplus -\omega_Y}.

In view of the symplectic rigidity phenomenon, it is likely that the smoothness hypotheses on {f,X,Y} can be relaxed substantially, but I will not try to formulate such a result here.

There are presumably many further examples of closed graph theorems (or closely related theorems, such as criteria for inverting a morphism, or open mapping type theorems) throughout mathematics; I would be interested to know of further examples.


In his wonderful article “On proof and progress in mathematics”, Bill Thurston describes (among many other topics) how one’s understanding of a given concept in mathematics (such as that of the derivative) can be vastly enriched by viewing it simultaneously from many subtly different perspectives; in the case of the derivative, he gives seven standard such perspectives (infinitesimal, symbolic, logical, geometric, rate, approximation, microscopic) and then mentions a much later perspective in the sequence (as describing a flat connection for a graph).

One can of course do something similar for many other fundamental notions in mathematics. For instance, the notion of a group {G} can be thought of in a number of (closely related) ways, such as the following:

  • (0) Motivating examples: A group is an abstraction of the operations of addition/subtraction or multiplication/division in arithmetic or linear algebra, or of composition/inversion of transformations.
  • (1) Universal algebraic: A group is a set {G} with an identity element {e}, a unary inverse operation {\cdot^{-1}: G \rightarrow G}, and a binary multiplication operation {\cdot: G \times G \rightarrow G} obeying the relations (or axioms) {e \cdot x = x \cdot e = x}, {x \cdot x^{-1} = x^{-1} \cdot x = e}, {(x \cdot y) \cdot z = x \cdot (y \cdot z)} for all {x,y,z \in G}. (A brute-force verification of these axioms for a small example is sketched just after this list.)
  • (2) Symmetric: A group is all the ways in which one can transform a space {V} to itself while preserving some object or structure {O} on this space.
  • (3) Representation theoretic: A group is identifiable with a collection of transformations on a space {V} which is closed under composition and inverse, and contains the identity transformation.
  • (4) Presentation theoretic: A group can be generated by a collection of generators subject to some number of relations.
  • (5) Topological: A group is the fundamental group {\pi_1(X)} of a connected topological space {X}.
  • (6) Dynamic: A group represents the passage of time (or of some other variable(s) of motion or action) on a (reversible) dynamical system.
  • (7) Category theoretic: A group is a category with one object, in which all morphisms have inverses.
  • (8) Quantum: A group is the classical limit {q \rightarrow 1} of a quantum group.
  • etc.
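
As promised in (1), here is a tiny Python sketch (illustrative only, with no claim beyond the obvious) that verifies the universal algebraic axioms by exhaustive checking for the cyclic group {{\bf Z}/6{\bf Z}}.

    # Exhaustive check of the axioms in perspective (1) for G = Z/6Z under addition:
    # identity laws, inverse laws, and associativity.

    n = 6
    G = range(n)
    e = 0
    def op(x, y): return (x + y) % n     # the binary operation
    def inv(x):   return (-x) % n        # the unary inverse operation

    assert all(op(e, x) == x == op(x, e) for x in G)                    # identity
    assert all(op(x, inv(x)) == e == op(inv(x), x) for x in G)          # inverses
    assert all(op(op(x, y), z) == op(x, op(y, z))
               for x in G for y in G for z in G)                        # associativity
    print("(Z/6Z, +) satisfies the group axioms")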

One can view a large part of group theory (and related subjects, such as representation theory) as exploring the interconnections between various of these perspectives. As one’s understanding of the subject matures, many of these formerly distinct perspectives slowly merge into a single unified perspective.

From a recent talk by Ezra Getzler, I learned a more sophisticated perspective on a group, somewhat analogous to Thurston’s example of a sophisticated perspective on a derivative (and coincidentally, flat connections play a central role in both):

  • (37) Sheaf theoretic: A group is identifiable with a (set-valued) sheaf on the category of simplicial complexes such that the morphisms associated to collapses of {d}-simplices are bijective for {d > 1} (and merely surjective for {d \leq 1}).

This interpretation of the group concept is apparently due to Grothendieck, though it is motivated also by homotopy theory. One of the key advantages of this interpretation is that it generalises easily to the notion of an {n}-group (simply by replacing {1} with {n} in (37)), whereas the other interpretations listed earlier require a certain amount of subtlety in order to generalise correctly (in particular, they usually themselves require higher-order notions, such as {n}-categories).

The connection of (37) with any of the other perspectives of a group is elementary, but not immediately obvious; I enjoyed working out exactly what the connection was, and thought it might be of interest to some readers here, so I reproduce it below the fold.

[Note: my reconstruction of Grothendieck’s perspective, and of the appropriate terminology, is likely to be somewhat inaccurate in places: corrections are of course very welcome.]

Read the rest of this entry »

I’ve just uploaded to the arXiv my joint paper with Tim Austin, “On the testability and repair of hereditary hypergraph properties“, which has been submitted to Random Structures and Algorithms. In this paper we prove some positive and negative results for the testability (and the local repairability) of various properties of directed or undirected graphs and hypergraphs, which can be either monochromatic or multicoloured.

The negative results have already been discussed in a previous posting of mine, so today I will focus on the positive results. The property testing results here are finitary results, but it turns out to be rather convenient to use a certain correspondence principle (the hypergraph version of the Furstenberg correspondence principle) to convert the question into one about exchangeable probability measures on spaces of hypergraphs (i.e. on random hypergraphs whose probability distribution is invariant under exchange of vertices). Such objects are also closely related to the “graphons” and “hypergraphons” that emerge as graph limits, as studied by Lovasz-Szegedy, Elek-Szegedy, and others. Somewhat amusingly, once one does so, it then becomes convenient to keep track of objects indexed by vertex sets and how they are exchanged via the language of category theory, and in particular using the concept of a natural transformation to describe such objects as exchangeable measures, graph colourings, and local modification rules. I will try to sketch out some of these connections, after describing the main positive results.

Read the rest of this entry »

Before we begin our study of dynamical systems, topological dynamical systems, and measure-preserving systems (as defined in the previous lecture), it is convenient to give these three classes the structure of a category. One of the basic insights of category theory is that the mathematical objects in a given class (such as dynamical systems) are best studied not in isolation, but in relation to each other, via morphisms. Furthermore, many other basic concepts pertaining to these objects (e.g. subobjects, factors, direct sums, irreducibility, etc.) can be defined in terms of these morphisms. One advantage of taking this perspective here is that it provides a unified way of defining these concepts for the three different categories of dynamical systems, topological dynamical systems, and measure-preserving systems that we will study in this course, thus sparing us the need to give any of our definitions (except for our first one below) in triplicate.

Read the rest of this entry »
