It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

The standard response to this question is turbulence – the behaviour of the three-dimensional Navier-Stokes equations at fine scales is much more nonlinear (and hence unstable) than at coarse scales. I would phrase the obstruction slightly differently, as supercriticality. Or more precisely, all of the globally controlled quantities for Navier-Stokes evolution which we are aware of (and we are not aware of very many) are either supercritical with respect to scaling, which means that they are much weaker at controlling fine-scale behaviour than at controlling coarse-scale behaviour, or they are non-coercive, which means that they do not really control the solution at all, either at coarse scales or at fine. (I’ll define these terms more precisely later.) At present, all known methods for obtaining global smooth solutions to a (deterministic) nonlinear PDE Cauchy problem require either

  1. Exact and explicit solutions (or at least an exact, explicit transformation to a significantly simpler PDE or ODE);
  2. Perturbative hypotheses (e.g. small data, data close to a special solution, or more generally a hypothesis which involves an \epsilon somewhere); or
  3. One or more globally controlled quantities (such as the total energy) which are both coercive and either critical or subcritical.

(Note that the presence of (1), (2), or (3) is currently a necessary condition for a global regularity result, but far from sufficient; otherwise, papers on the global regularity problem for various nonlinear PDEs would be substantially shorter :-) . In particular, there have been many good, deep, and highly non-trivial papers recently on global regularity for Navier-Stokes, but they all assume either (1), (2) or (3) via additional hypotheses on the data or solution. For instance, in recent years we have seen good results on global regularity assuming (2), as well as good results on global regularity assuming (3); a complete bibliography of recent results is unfortunately too lengthy to be given here.)

The Navier-Stokes global regularity problem for arbitrarily large smooth data lacks all of these three ingredients. Reinstating (2) is impossible without changing the statement of the problem, or adding some additional hypotheses; also, in perturbative situations the Navier-Stokes equation evolves almost linearly, while in the non-perturbative setting it behaves very nonlinearly, so there is basically no chance of a reduction of the non-perturbative case to the perturbative one unless one comes up with a highly nonlinear transform to achieve this (e.g. a naive scaling argument cannot possibly work). Thus, one is left with only three possible strategies if one wants to solve the full problem:

  1. Solve the Navier-Stokes equation exactly and explicitly (or at least transform this equation exactly and explicitly to a simpler equation);
  2. Discover a new globally controlled quantity which is both coercive and either critical or subcritical; or
  3. Discover a new method which yields global smooth solutions even in the absence of the ingredients (1), (2), and (3) above.

For the rest of this post I refer to these strategies as “Strategy 1”, “Strategy 2”, and “Strategy 3”.

Much effort has been expended here, especially on Strategy 3, but the supercriticality of the equation presents a truly significant obstacle which already defeats all known methods. Strategy 1 is probably hopeless; the last century of experience has shown that (with the very notable exception of completely integrable systems, of which the Navier-Stokes equations are not an example) most nonlinear PDE, even those arising from physics, do not enjoy explicit formulae for solutions from arbitrary data (although it may well be the case that there are interesting exact solutions from special (e.g. symmetric) data). Strategy 2 may have a little more hope; after all, the Poincaré conjecture became solvable (though still very far from trivial) after Perelman introduced a new globally controlled quantity for Ricci flow (the Perelman entropy) which turned out to be both coercive and critical. (See also my exposition of this topic.) But we are still not very good at discovering new globally controlled quantities; to quote Klainerman, “the discovery of any new bound, stronger than that provided by the energy, for general solutions of any of our basic physical equations would have the significance of a major event” (emphasis mine).

I will return to Strategy 2 later, but let us now discuss Strategy 3. The first basic observation is that the Navier-Stokes equation, like many other of our basic model equations, obeys a scale invariance: specifically, given any scaling parameter \lambda > 0, and any smooth velocity field u: [0, T) \times {\Bbb R}^3 \to {\Bbb R}^3 solving the Navier-Stokes equations for some time T, one can form a new velocity field u^{(\lambda)}: [0, \lambda^2 T) \times {\Bbb R}^3 \to {\Bbb R}^3, which solves the Navier-Stokes equations up to time \lambda^2 T, by the formula

u^{(\lambda)}(t,x) := \frac{1}{\lambda} u( \frac{t}{\lambda^2}, \frac{x}{\lambda} )

(Strictly speaking, this scaling invariance is only present as stated in the absence of an external force, and with the non-periodic domain {\Bbb R}^3 rather than the periodic domain {\Bbb T}^3. One can adapt the arguments here to these other settings with some minor effort, the key point being that an approximate scale invariance can play the role of a perfect scale invariance in the considerations below. The pressure field p(t,x) gets rescaled too, to p^{(\lambda)}(t,x) := \frac{1}{\lambda^2} p( \frac{t}{\lambda^2}, \frac{x}{\lambda} ), but we will not need to study the pressure here. The viscosity \nu remains unchanged.)
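
To see why this is an invariance, one can plug u^{(\lambda)} into the equation \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p and check, via the chain rule and the pressure rescaling just noted, that every term acquires the same prefactor:

\partial_t u^{(\lambda)}(t,x) = \frac{1}{\lambda^3} (\partial_t u)( \frac{t}{\lambda^2}, \frac{x}{\lambda} ); \quad (u^{(\lambda)} \cdot \nabla) u^{(\lambda)}(t,x) = \frac{1}{\lambda^3} ((u \cdot \nabla) u)( \frac{t}{\lambda^2}, \frac{x}{\lambda} );

\nu \Delta u^{(\lambda)}(t,x) = \frac{1}{\lambda^3} (\nu \Delta u)( \frac{t}{\lambda^2}, \frac{x}{\lambda} ); \quad \nabla p^{(\lambda)}(t,x) = \frac{1}{\lambda^3} (\nabla p)( \frac{t}{\lambda^2}, \frac{x}{\lambda} ).

Since all four terms pick up the common factor \frac{1}{\lambda^3}, the equation is preserved (with the same viscosity \nu), and the divergence-free condition \nabla \cdot u^{(\lambda)} = 0 follows similarly.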

We shall think of the rescaling parameter \lambda as being large (e.g. \lambda > 1). One should then think of the transformation from u to u^{(\lambda)} as a kind of “magnifying glass”, taking fine-scale behaviour of u and matching it with an identical (but rescaled, and slowed down) coarse-scale behaviour of u^{(\lambda)}. The point of this magnifying glass is that it allows us to treat both fine-scale and coarse-scale behaviour on an equal footing, by identifying both types of behaviour with something that goes on at a fixed scale (e.g. the unit scale). Observe that the scaling suggests that fine-scale behaviour should play out on much smaller time scales than coarse-scale behaviour (T versus \lambda^2 T). Thus, for instance, if a unit-scale solution does something funny at time 1, then the rescaled fine-scale solution will exhibit something similarly funny at spatial scale 1/\lambda and at time 1/\lambda^2. Blowup can occur when the solution shifts its energy into increasingly finer and finer scales, thus evolving more and more rapidly and eventually reaching a singularity in which the scale in both space and time on which the bulk of the evolution is occurring has shrunk to zero. In order to prevent blowup, therefore, we must arrest this motion of energy from coarse scales (or low frequencies) to fine scales (or high frequencies). (There are many ways in which to make these statements rigorous, for instance using Littlewood-Paley theory, which we will not discuss here, preferring instead to leave terms such as “coarse-scale” and “fine-scale” undefined.)

Now, let us take an arbitrary large-data smooth solution to Navier-Stokes, and let it evolve over a very long period of time [0,T), assuming that it stays smooth except possibly at time T. At very late times of the evolution, such as those near to the final time T, there is no reason to expect the solution to resemble the initial data any more (except in perturbative regimes, but these are not available in the arbitrary large-data case). Indeed, the only control we are likely to have on the late-time stages of the solution is that provided by globally controlled quantities of the evolution. Barring a breakthrough in Strategy 2, we only have two really useful globally controlled (i.e. bounded even for very large T) quantities:

  • The maximum kinetic energy \sup_{0 \leq t < T} \frac{1}{2} \int_{{\Bbb R}^3} |u(t,x)|^2\ dx; and
  • The cumulative energy dissipation \frac{1}{2} \int_0^T \int_{{\Bbb R}^3} |\nabla u(t,x)|^2\ dx dt.

Indeed, the energy identity implies that these quantities are both bounded, up to constants depending on the viscosity \nu, by the initial kinetic energy E, which could be large (we are assuming our data could be large) but is at least finite by hypothesis.
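
(For the reader who wants to see where these bounds come from: pairing the equation with u and integrating by parts – the transport and pressure terms vanish thanks to the divergence-free condition – yields the energy identity

\frac{d}{dt} \frac{1}{2} \int_{{\Bbb R}^3} |u(t,x)|^2\ dx = - \nu \int_{{\Bbb R}^3} |\nabla u(t,x)|^2\ dx,

and integrating this in time bounds both the maximum kinetic energy and the cumulative energy dissipation by the initial kinetic energy E, up to viscosity-dependent constants.)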

The above two quantities are coercive, in the sense that control of these quantities implies that the solution, even at very late times, stays in a bounded region of some function space. However, this is basically the only thing we know about the solution at late times (other than that it is smooth until time T, but this is a qualitative assumption and gives no bounds). So, unless there is a breakthrough in Strategy 2, we cannot rule out the worst-case scenario that the solution near time T is essentially an arbitrary smooth divergence-free vector field which is bounded both in kinetic energy and in cumulative energy dissipation by E. In particular, near time T the solution could be concentrating the bulk of its energy into fine-scale behaviour, say at some spatial scale 1/\lambda. (Of course, cumulative energy dissipation is not a function of a single time, but is an integral over all time; let me suppress this fact for the sake of the current discussion.)

Now, let us take our magnifying glass and blow up this fine-scale behaviour by \lambda to create a coarse-scale solution to Navier-Stokes. Given that the fine-scale solution could (in the worst-case scenario) be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most E, the rescaled unit-scale solution can be as bad as an arbitrary smooth vector field with kinetic energy and cumulative energy dissipation at most E \lambda, as a simple change-of-variables shows. Note that the control given by our two key quantities has worsened by a factor of \lambda; because of this worsening, we say that these quantities are supercritical – they become increasingly useless for controlling the solution as one moves to finer and finer scales. This should be contrasted with critical quantities (such as the energy for two-dimensional Navier-Stokes), which are invariant under scaling and thus control all scales equally well (or equally poorly), and subcritical quantities, control of which becomes increasingly powerful at fine scales (and increasingly useless at very coarse scales).
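
For the record, here is that change of variables: substituting y = x/\lambda and s = t/\lambda^2 (so that dx = \lambda^3\ dy and dt = \lambda^2\ ds), we compute

\frac{1}{2} \int_{{\Bbb R}^3} |u^{(\lambda)}(t,x)|^2\ dx = \frac{\lambda^3}{\lambda^2} \cdot \frac{1}{2} \int_{{\Bbb R}^3} |u(s,y)|^2\ dy \leq \lambda E

and

\frac{1}{2} \int_0^{\lambda^2 T} \int_{{\Bbb R}^3} |\nabla u^{(\lambda)}(t,x)|^2\ dx\ dt = \frac{\lambda^3 \cdot \lambda^2}{\lambda^4} \cdot \frac{1}{2} \int_0^T \int_{{\Bbb R}^3} |\nabla u(s,y)|^2\ dy\ ds \leq \lambda E,

so both quantities worsen by exactly one power of \lambda.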

Now, suppose we know of examples of unit-scale solutions whose kinetic energy and cumulative energy dissipation are as large as E \lambda, but which can shift their energy to the next finer scale, e.g. a half-unit scale, in a bounded amount O(1) of time. Given the previous discussion, we cannot rule out the possibility that our rescaled solution behaves like this example. Undoing the scaling, this means that we cannot rule out the possibility that the original solution will shift its energy from spatial scale 1/\lambda to spatial scale 1/(2\lambda) in time O(1/\lambda^2). If this bad scenario repeats over and over again, then convergence of geometric series shows that the solution may in fact blow up in finite time. Note that the bad scenarios do not have to happen immediately after each other (the self-similar blowup scenario); the solution could shift from scale 1/\lambda to 1/(2\lambda), wait for a little bit (in rescaled time) to “mix up” the system and return to an “arbitrary” (and thus potentially “worst-case”) state, and then shift to 1/(4\lambda), and so forth. While the cumulative energy dissipation bound can provide a little bit of a bound on how long the system can “wait” in such a “holding pattern”, it is far too weak to prevent blowup in finite time. To put it another way, we have no rigorous, deterministic way of preventing Maxwell’s demon from plaguing the solution at increasingly frequent (in absolute time) intervals, invoking various rescalings of the above scenario to nudge the energy of the solution into increasingly finer scales, until blowup is attained.
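
The arithmetic behind the “convergence of geometric series” claim is worth making explicit. If the k-th shift, from spatial scale 2^{-k}/\lambda to scale 2^{-(k+1)}/\lambda, takes time comparable to the square of the spatial scale (as the scaling dictates), then the total time consumed by the entire cascade is

\sum_{k=0}^\infty O( (2^{-k}/\lambda)^2 ) = O( \frac{1}{\lambda^2} ) \sum_{k=0}^\infty 4^{-k} = O( \frac{1}{\lambda^2} ),

which is finite; the solution can thus reach arbitrarily fine scales – and hence a singularity – in bounded time.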

Thus, in order for Strategy 3 to be successful, we basically need to rule out the scenario in which unit-scale solutions with arbitrarily large kinetic energy and cumulative energy dissipation shift their energy to the next finer scale. But every single analytic technique we are aware of (except for those involving exact solutions, i.e. Strategy 1) requires at least one bound on the size of the solution in order to have any chance at all. Basically, one needs at least one bound in order to control all nonlinear errors – and any strategy we know of which does not proceed via exact solutions will have at least one nonlinear error that needs to be controlled. The only thing we have here is a bound on the scale of the solution, which is not a bound in the sense that a norm of the solution is bounded; and so we are stuck.

To summarise, any argument which claims to yield global regularity for Navier-Stokes via Strategy 3 must inevitably (via the scale invariance) provide a radically new method for obtaining non-trivial control of nonlinear unit-scale solutions of arbitrarily large size for unit time, which looks impossible without new breakthroughs on Strategy 1 or Strategy 2. (There are a couple of loopholes that one might try to exploit: one can instead try to refine the control on the “waiting time” or “amount of mixing” between each shift to the next finer scale, or try to exploit the fact that each such shift requires a certain amount of energy dissipation, but one can use scaling arguments similar to the preceding to show that these types of loopholes cannot be exploited without a new bound along the lines of Strategy 2, or some sort of argument which works for arbitrarily large data at unit scales.)

To rephrase in an even more jargon-heavy manner: the “energy surface” on which the dynamics is known to live can be quotiented by the scale invariance. After this quotienting, the solution can stray arbitrarily far from the origin even at unit scales, and so we lose all control of the solution unless we have exact control (Strategy 1) or can significantly shrink the energy surface (Strategy 2).

The above was a general critique of Strategy 3. Now I’ll turn to some known specific attempts to implement Strategy 3, and discuss where the difficulty lies with these:

  1. Using weaker or approximate notions of solution (e.g. viscosity solutions, penalised solutions, super- or sub- solutions, etc.). This type of approach dates all the way back to Leray. It has long been known that by weakening the nonlinear portion of Navier-Stokes (e.g. taming the nonlinearity), or strengthening the linear portion (e.g. introducing hyperdissipation), or by performing a discretisation or regularisation of spatial scales, or by relaxing the notion of a “solution”, one can get global solutions to approximate Navier-Stokes equations. The hope is then to take limits and recover a smooth solution, as opposed to a mere global weak solution, which was already constructed by Leray for Navier-Stokes all the way back in 1933. But in order to ensure the limit is smooth, we need convergence in a strong topology. In fact, the same type of scaling arguments used before basically require that we obtain convergence in either a critical or subcritical topology. Absent a breakthrough in Strategy 2, the only types of convergence we have are in very rough – in particular, in supercritical – topologies. Attempting to upgrade such convergence to critical or subcritical topologies is the qualitative analogue of the quantitative problems discussed earlier, and ultimately faces the same problem (albeit in very different language) of trying to control unit-scale solutions of arbitrarily large size. Working in a purely qualitative setting (using limits, etc.) instead of a quantitative one (using estimates, etc.) can disguise these problems (and, unfortunately, can lead to errors if limits are manipulated carelessly), but the qualitative formalism does not magically make these problems disappear. Note that weak solutions are already known to be badly behaved for the closely related Euler equation. More generally, by recasting the problem in a sufficiently abstract formalism (e.g. formal limits of near-solutions), there are a number of ways to create an abstract object which could be considered as a kind of generalised solution, but the moment one tries to establish actual control on the regularity of this generalised solution one will encounter all the supercriticality difficulties mentioned earlier.
  2. Iterative methods (e.g. contraction mapping principle, Nash-Moser iteration, power series, etc.) in a function space. These methods are perturbative, and require something to be small: either the data has to be small, the nonlinearity has to be small, or the time of existence desired has to be small. These methods are excellent for constructing local solutions for large data, or global solutions for small data, but cannot handle global solutions for large data (running into the same problems as any other Strategy 3 approach). These approaches are also typically rather insensitive to the specific structure of the equation, which is already a major warning sign since one can easily construct (rather artificial) systems similar to Navier-Stokes for which blowup is known to occur. The optimal perturbative result is probably very close to that established by Koch-Tataru, for reasons discussed in that paper.
  3. Exploiting blowup criteria. Perturbative theory can yield some highly non-trivial blowup criteria – that certain norms of the solution must diverge if the solution is to blow up. For instance, a celebrated result of Beale-Kato-Majda shows that the maximal vorticity must have a divergent time integral at the blowup point. However, all such blowup criteria are subcritical or critical in nature, and thus, barring a breakthrough in Strategy 2, the known globally controlled quantities cannot be used to reach a contradiction. Scaling arguments similar to those given above show that perturbative methods cannot achieve a supercritical blowup criterion. (A short scaling computation verifying the criticality of the Beale-Kato-Majda criterion appears after this list.)
  4. Asymptotic analysis of the blowup point(s). Another proposal is to rescale the solution near a blowup point and take some sort of limit, and then continue the analysis until a contradiction ensues. This type of approach is useful in many other contexts (for instance, in understanding Ricci flow). However, in order to actually extract a useful limit (in particular, one which still solves Navier-Stokes in a strong sense, and does not collapse to the trivial solution), one needs to uniformly control all rescalings of the solution – or in other words, one needs a breakthrough in Strategy 2. Another major difficulty with this approach is that blowup can occur not just at one point, but could conceivably occur on a one-dimensional set; this is another manifestation of supercriticality.
  5. Analysis of a minimal blowup solution. This is a strategy, initiated by Bourgain, which has recently been very successful in establishing large data global regularity for a variety of equations with a critical conserved quantity, namely to assume for contradiction that a blowup solution exists, and then extract a minimal blowup solution which minimises the conserved quantity. This strategy (which basically pushes the perturbative theory to its natural limit) seems set to become the standard method for dealing with large data critical equations. It has the appealing feature that there is enough compactness (or almost periodicity) in the minimal blowup solution (once one quotients out by the scaling symmetry) that one can begin to use subcritical and supercritical conservation laws and monotonicity formulae as well (see my survey on this topic). Unfortunately, as the strategy is currently understood, it does not seem to be directly applicable to a supercritical situation (unless one simply assumes that some critical norm is globally bounded) because it is impossible, in view of the scale invariance, to minimise a non-scale-invariant quantity.
  6. Abstract approaches (avoiding the use of properties specific to the Navier-Stokes equation). At its best, abstraction can efficiently organise and capture the key difficulties of a problem, placing the problem in a framework which allows for a direct and natural resolution of these difficulties without being distracted by irrelevant concrete details. (Kato’s semigroup method is a good example of this in nonlinear PDE; regrettably for this discussion, it is limited to subcritical situations.) At its worst, abstraction conceals the difficulty within some subtle notation or concept (e.g. in various types of convergence to a limit), thus incurring the risk that the difficulty is “magically” avoided by an inconspicuous error in the abstract manipulations. An abstract approach which manages to breezily ignore the supercritical nature of the problem thus looks very suspicious. More substantively, there are many equations which enjoy a coercive conservation law yet still can exhibit finite time blowup (e.g. the mass-critical focusing NLS equation); an abstract approach thus would have to exploit some subtle feature of Navier-Stokes which is not present in all the examples in which blowup is known to be possible. Such a feature is unlikely to be discovered abstractly before it is first discovered concretely; the field of PDE has proven to be the type of mathematics where progress generally starts in the concrete and then flows to the abstract, rather than vice versa.
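
As promised in item 3, here is the scaling computation demonstrating that the Beale-Kato-Majda criterion is critical. The vorticity \omega := \nabla \times u rescales as \omega^{(\lambda)}(t,x) = \frac{1}{\lambda^2} \omega( \frac{t}{\lambda^2}, \frac{x}{\lambda} ), and so the quantity appearing in the criterion transforms as

\int_0^{\lambda^2 T} \| \omega^{(\lambda)}(t) \|_{L^\infty}\ dt = \int_0^{\lambda^2 T} \frac{1}{\lambda^2} \| \omega(t/\lambda^2) \|_{L^\infty}\ dt = \int_0^T \| \omega(s) \|_{L^\infty}\ ds.

This quantity is thus invariant under the scaling – i.e. critical – and so, by the earlier discussion, the supercritical energy bounds cannot be used to keep it finite.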

If we abandon Strategy 1 and Strategy 3, we are thus left with Strategy 2 – discovering new bounds, stronger than those provided by the (supercritical) energy. This is not a priori impossible, but there is a huge gap between simply wishing for a new bound and actually discovering and then rigorously establishing one. Simply plugging the existing energy bounds into the Navier-Stokes equation and seeing what comes out will provide a few more bounds, but they will all be supercritical, as a scaling argument quickly reveals. The only other way we know of to create global non-perturbative deterministic bounds is to discover a new conserved or monotone quantity. In the past, when such quantities have been discovered, they have always been connected either to geometry (symplectic, Riemannian, complex, etc.), to physics, or to some consistently favourable (defocusing) sign in the nonlinearity (or in various “curvatures” in the system). There appears to be very little usable geometry in the equation; on the one hand, the Euclidean structure enters the equation via the diffusive term \Delta and by the divergence-free nature of the vector field, but the nonlinearity is instead describing transport by the velocity vector field, which is basically just an arbitrary volume-preserving infinitesimal diffeomorphism (and in particular does not respect the Euclidean structure at all). One can try to quotient out by this diffeomorphism (i.e. work in material coordinates) but there are very few geometric invariants left to play with when one does so. (In the case of the Euler equations, the vorticity vector field is preserved modulo this diffeomorphism, as observed for instance by Li, but this invariant is very far from coercive, being almost purely topological in nature.) The Navier-Stokes equation, being a system rather than a scalar equation, also appears to have almost no favourable sign properties, in particular ruling out the type of bounds which the maximum principle or similar comparison principles can give. This leaves physics, but apart from the energy, it is not clear if there are any physical quantities of fluids which are deterministically monotone. (Things look better on the stochastic level, in which the laws of thermodynamics might play a role, but the Navier-Stokes problem, as defined by the Clay Institute, is deterministic, and so we have Maxwell’s demon to contend with.) It would of course be fantastic to obtain a fourth source of non-perturbative controlled quantities, not arising from geometry, physics, or favourable signs, but this looks like something of a long shot at present. Indeed, given the turbulent, unstable, and chaotic nature of Navier-Stokes, it is quite conceivable that in fact no reasonable globally controlled quantities exist beyond those which arise from the energy.
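
(To make the parenthetical remark about vorticity precise: taking the curl of the Euler equations shows that the vorticity \omega = \nabla \times u obeys the transport equation

\partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u,

which asserts exactly that \omega is carried by the flow of u as a vector field, i.e. Lie-transported; in material coordinates \omega is therefore frozen into the fluid, which is the sense in which it is preserved modulo the diffeomorphism. For Navier-Stokes, viscosity adds a term \nu \Delta \omega to the right-hand side, and even this invariance is lost.)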

Of course, given how hard it is to show global regularity, one might instead try to establish finite time blowup (this is also acceptable for the Millennium prize). Unfortunately, even though the Navier-Stokes equation is known to be very unstable, it is not at all clear how to pass from this to a rigorous demonstration of a blowup solution. All the rigorous finite time blowup results (as opposed to mere instability results) that I am aware of rely on one or more of the following ingredients:

  1. Exact blowup solutions (or at least an exact transformation to a significantly simpler PDE or ODE, for which blowup can be established);
  2. An ansatz for a blowup solution (or approximate solution), combined with some nonlinear stability theory for that ansatz;
  3. A comparison principle argument, dominating the solution by another object which blows up in finite time, taking the solution with it; or
  4. An indirect argument, constructing a functional of the solution which must attain an impossible value in finite time (e.g. a quantity which is manifestly non-negative for smooth solutions, but must become negative in finite time).

It may well be that there is some exotic symmetry reduction which gives (1), but no-one has located any good exactly solvable special case of Navier-Stokes (in fact, those which have been found are known to have global smooth solutions). (2) is problematic for two reasons: firstly, we do not have a good ansatz for a blowup solution, but, perhaps more importantly, it seems hopeless to establish a stability theory for any such ansatz thus created, as this problem is essentially a more difficult version of the global regularity problem, and in particular subject to the main difficulty, namely controlling the highly nonlinear behaviour at fine scales. (One of the ironies in pursuing method (2) is that in order to establish rigorous blowup in some sense, one must first establish rigorous stability in some other (renormalised) sense.) Method (3) would require a comparison principle, which as noted before appears to be absent for the non-scalar Navier-Stokes equations. Method (4) suffers from the same problem, ultimately coming back to the “Strategy 2” problem that we have virtually no globally monotone quantities in this system to play with (other than energy monotonicity, which clearly looks insufficient by itself). Obtaining a new type of mechanism to force blowup other than (1)-(4) above would be quite revolutionary, not just for Navier-Stokes; but I am unaware of even any proposals in these directions, though perhaps topological methods might have some effectiveness.
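
To illustrate what mechanism (4) looks like when it is available, consider the focusing mass-critical NLS equation i u_t + \Delta u + |u|^{4/d} u = 0 mentioned earlier. By a classical computation of Glassey, the variance V(t) := \int |x|^2 |u(t,x)|^2\ dx, which is manifestly non-negative for solutions of finite variance, obeys the virial identity

\partial_t^2 V(t) = 16 E[u],

where E[u] is the conserved energy. If the initial data has negative energy, then V is a non-negative function with strictly negative second derivative, and so must become negative in finite time – an impossibility, which forces blowup beforehand. It is precisely this sort of manifestly signed, usefully monotone functional that we do not know how to manufacture for Navier-Stokes.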

So, after all this negativity, do I have any positive suggestions for how to solve this problem? My opinion is that Strategy 1 is impossible, and Strategy 2 would require either some exceptionally good intuition from physics, or else an incredible stroke of luck. Which leaves Strategy 3 (and indeed, I think one of the main reasons why the Navier-Stokes problem is interesting is that it forces us to create a Strategy 3 technique). Given how difficult this strategy seems to be, as discussed above, I only have some extremely tentative and speculative thoughts in this direction, all of which I would classify as “blue-sky” long shots:

  1. Work with ensembles of data, rather than a single initial datum. All of our current theory for deterministic evolution equations deals only with a single solution from a single initial datum. It may be more effective to work with parameterised families of data and solutions, or perhaps probability measures (e.g. Gibbs measures or other invariant measures). One obvious partial result to shoot for is to try to establish global regularity for generic large data rather than all large data; in other words, acknowledge that Maxwell’s demon might exist, but show that the probability of it actually intervening is very small. The problem is that we have virtually no tools for dealing with generic (average-case) data other than by treating all (worst-case) data; the enemy is that the Navier-Stokes flow itself might have some perverse entropy-reducing property which somehow makes the average case drift towards (or at least recur near) the worst case over long periods of time. This is incredibly unlikely to be the truth, but we have no tools to prevent it from happening at present.
  2. Work with a much simpler (but still supercritical) toy model. The Navier-Stokes model is parabolic, which is nice, but is complicated in many other ways, being relatively high-dimensional and also non-scalar in nature. It may make sense to work with other, simplified models which still contain the key difficulty that the only globally controlled quantities are supercritical. Examples include the Katz-Pavlovic dyadic model for the Euler equations (for which blowup can be demonstrated by a monotonicity argument; see this survey for more details), or the spherically symmetric defocusing supercritical nonlinear wave equation. (A toy numerical simulation of such a dyadic model is sketched after this list.)
  3. Develop non-perturbative tools to control deterministic non-integrable dynamical systems. Throughout this post we have been discussing PDEs, but actually there are similar issues arising in the nominally simpler context of finite-dimensional dynamical systems (ODEs). Except in perturbative contexts (such as the neighbourhood of a fixed point or invariant torus), the long-time evolution of a dynamical system for deterministic data is still largely only controllable by the classical tools of exact solutions, conservation laws and monotonicity formulae; a discovery of a new and effective tool for this purpose would be a major breakthrough. One natural place to start is to better understand the long-time, non-perturbative dynamics of the classical three-body problem, for which there are still fundamental unsolved questions.
  4. Establish really good bounds for critical or nearly-critical problems. Recently, I showed that having a very good bound for a critical equation essentially implies that one also has a global regularity result for a slightly supercritical equation. The idea is to use a monotonicity formula which does weaken very slightly as one passes to finer and finer scales, but such that each such passage to a finer scale costs a significant amount of monotonicity; since there is only a bounded amount of monotonicity to go around, it turns out that the latter effect just barely manages to overcome the former in my equation to recover global regularity (though by doing so, the bounds worsen from polynomial in the critical case to double exponential in my logarithmically supercritical case). I severely doubt that my method can be pushed to non-logarithmically supercritical equations, but it does illustrate that having very strong bounds at the critical level may lead to some modest progress on the problem.
  5. Try a topological method. This is a special case of (1). It may well be that a primarily topological argument may be used either to construct solutions, or to establish blowup; there are some precedents for this type of construction in elliptic theory. Such methods are very global by nature, and thus not restricted to perturbative or nearly-linear regimes. However, there is no obvious topology here (except possibly for that generated by the vortex filaments) and as far as I know, there is not even a “proof-of-concept” version of this idea for any evolution equation. So this is really more of a wish than any sort of concrete strategy.
  6. Understand pseudorandomness. This is an incredibly vague statement; but part of the difficulty with this problem, which also exists in one form or another in many other famous problems (e.g. Riemann hypothesis, P=BPP, P \neq NP, twin prime and Goldbach conjectures, normality of digits of \pi, Collatz conjecture, etc.) is that we expect any sufficiently complex (but deterministic) dynamical system to behave “chaotically” or “pseudorandomly”, but we still have very few tools for actually making this intuition precise, especially if one is considering deterministic initial data rather than generic data. Understanding pseudorandomness in other contexts, even dramatically different ones, may indirectly shed some insight on the turbulent behaviour of Navier-Stokes.
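
As a concrete supplement to item 2 above, here is a minimal numerical sketch of a dyadic shell model of the type considered by Katz-Pavlovic. The coefficients and normalisations below are an illustrative choice rather than the canonical model (the published versions differ in the exact powers of the lacunarity parameter), and the variable names are hypothetical conveniences of mine; the point is only to exhibit, in a few lines, the coarse-to-fine energy cascade that this post has been discussing.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Dyadic shell model in the spirit of Katz-Pavlovic: a[j] stands in for
    # the energy content at frequency ~ 2^j, and the quadratic terms push
    # energy from shell j-1 up to shell j.
    LAM = 2.0   # lacunarity of the dyadic scales (an illustrative choice)
    N = 12      # crude truncation of the infinite shell hierarchy

    def rhs(t, a):
        da = np.zeros_like(a)
        for j in range(N):
            inflow = a[j - 1] ** 2 if j > 0 else 0.0         # arriving from shell j-1
            outflow = a[j] * a[j + 1] if j < N - 1 else 0.0  # departing to shell j+1
            da[j] = LAM ** j * inflow - LAM ** (j + 1) * outflow
        return da

    a0 = np.zeros(N)
    a0[0] = 1.0  # all energy initially at the coarsest shell
    sol = solve_ivp(rhs, (0.0, 2.0), a0, rtol=1e-9, atol=1e-12, dense_output=True)

    # The total energy sum(a[j]^2) is exactly conserved (the nonlinear terms
    # telescope when summed), yet for non-negative data the energy-weighted
    # mean shell index climbs: a discrete analogue of the coarse-to-fine
    # cascade in the main text.
    for t in (0.0, 0.5, 1.0, 2.0):
        a = sol.sol(t)
        energy = float(np.sum(a ** 2))
        mean_shell = float(np.sum(np.arange(N) * a ** 2)) / energy
        print(f"t={t:4.1f}  energy={energy:.6f}  mean shell index={mean_shell:.2f}")

Running this, the energy stays fixed to solver precision while the mean shell index increases; adding a dissipative term -nu * LAM ** (2 * j) * a[j] to the right-hand side gives a Navier-Stokes-like variant, and one can then watch in this toy setting how little the (supercritical) energy bound says about the high shells.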

In conclusion, while it is good to occasionally have a crack at impossible problems, just to try one’s luck, I would personally spend much more of my time on other, more tractable PDE problems than the Clay prize problem, though one should certainly keep that problem in mind if, in the course of working on other problems, one indeed does stumble upon something that smells like a breakthrough in Strategy 1, 2, or 3 above. (In particular, there are many other serious and interesting questions in fluid equations that are not anywhere near as difficult as global regularity for Navier-Stokes, but still highly worthwhile to resolve.)