# Recursive Percolation

The following is a rather pretty problem concerning percolation – which was my inspiration for the decoration of this webpage.

Problem 1. Fix $d\ge 2$, and let $p_c=p_c(d)$ be the critical probability of bond percolation on $\mathbb{Z}^d$. Fix $p \in (p_c, 1)$, and let $X=(X(e): e \in E(\mathbb{Z}^d))$ be bond percolation with parameter $p$. With probability $1$, there is a unique infinite cluster $\mathcal{C}$ of $X$, which can, of course, be considered as an infinite graph in its own right. What is the critical probability $p_c(\mathcal{C})$?

# Try to Taylor expand this!

It’s a sad fact in analysis, which every undergraduate will see, that not every continuous function is differentiable: consider, for instance, the function $f_1(x)=|x|$ on any interval $I$ containing $0$. Of course, this function can’t be Taylor expanded about $0$, and neither can the functions $f_3(x)=|x|^3, f_5(x)=|x|^5, \cdots$.

What goes wrong here is that these functions are not infinitely differentiable. Even when the function is infinitely differentiable, we can’t always Taylor expand on any interval; a classic example here is $f(x)=\begin{cases} \exp(-1/x^2) & x\neq 0, \\ 0 & x=0\end{cases}$ on the interval $I=(-1,1)$. This is not quite so elementary as the previous examples but, after some thought, one shows that $f$ is infinitely differentiable on the whole interval, and that the $n^\text{th}$ derivative $f^{(n)}(0)=0$, for all $n\ge 1$. As a result, the Taylor approximations $p_n(x)=\sum_{m=0}^n f^{(m)}(0)\frac{x^m}{m!}$ are identically $0$ – which certainly doesn’t converge to the original function $f$.
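As a quick numerical sanity check (a small Python sketch of my own, not part of the problem): since every derivative of $f$ vanishes at $0$, every Taylor polynomial about $0$ is identically zero, while $f$ itself is strictly positive away from $0$, so the approximation error never shrinks.

```python
import math

def f(x):
    # The classic smooth but non-analytic-at-0 example.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f^(n)(0) = 0 for all n, so every Taylor polynomial p_n about 0
# is identically zero; the error at x is therefore just f(x) itself.
x = 0.5
p_n_at_x = 0.0
print(f(x))                   # exp(-4), comfortably away from 0
print(abs(f(x) - p_n_at_x))   # the error does not improve with n
```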

In a second course on analysis, one sees examples like the Weierstraß function, which are continuous but have no points of differentiability. Still later, one sees that ‘most’ continuous functions are nowhere differentiable, in the sense of the Baire Category Theorem – and in the same spirit, most continuously differentiable functions are not twice differentiable, and so on. The following series of exercises, aimed at students familiar with the machinery of the Baire Category Theorem, leads to a similar conclusion: most infinitely differentiable functions are not given by a power series on any nontrivial interval. It is, of course, possible to give examples directly, but that isn’t nearly as fun!

Problem 1. Consider the space of smooth functions $X=C^\infty([0,1])$ – that is, those functions $f: [0,1]\rightarrow \mathbb{R}$ which can be extended to an infinitely differentiable function on an open interval $I\supset [0,1]$. Define, for $f, g \in X$, the distance $d(f,g)=\sum_{n\ge 0} 2^{-n}\min\left(1, \sup_{x\in [0,1]} \left| f^{(n)}(x)-g^{(n)}(x)\right|\right).$ Show that $d$ is well defined and makes $X$ into a complete metric space, and that a sequence $f_n$ in $X$ converges to $f \in X$ if, and only if, for all $m\ge 0$, the $m^\text{th}$ derivatives $f_n^{(m)}$ converge uniformly to $f^{(m)}$.
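To get a feel for the metric, here is a small numerical sketch (my own illustration, with the derivative sups plugged in analytically rather than computed from the functions): for $g_\varepsilon(x)=\varepsilon\sin(3x)$, the $n^\text{th}$ derivative has sup at most $\varepsilon\,3^n$ on $[0,1]$, which outgrows $2^n$ – this is exactly why the truncation $\min(1,\cdot)$ is needed for the series to converge.

```python
def frechet_distance(sup_seq, terms=60):
    # Approximates d(f, g) = sum_n 2^{-n} * min(1, sup|f^(n) - g^(n)|),
    # given the sequence of derivative sups, by truncating the series.
    return sum(2.0 ** -n * min(1.0, sup_seq(n)) for n in range(terms))

# g_eps(x) = eps * sin(3x) against the zero function on [0, 1]:
# sup |g_eps^(n)| <= eps * 3^n (sup of a scaled sin or cos).
for eps in [1.0, 1e-3, 1e-6]:
    d = frechet_distance(lambda n: eps * 3.0 ** n)
    print(eps, d)
# Without the min(1, .), the n-th term would be eps * (3/2)^n,
# and the series would diverge for every eps > 0.
```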

As an aside, this is a nice example of a locally convex space which is not a normed space, but which still admits a complete metric. Such spaces are known as Fréchet Spaces.

Next, we look at a criterion which determines whether a smooth function can arise as a power series.

Problem 2. Let $I \subset \mathbb{R}$ be an open interval of the real line, and suppose that $f: I\rightarrow \mathbb{R}$ is given by a convergent power series. Show that, for any closed interval $J\subset I$, there exist $C, R<\infty$ such that, for all $m\ge 0$, $\sup_J |f^{(m)}| \le C\, m!R^m$.

Conversely, show that if such a bound holds for every closed interval $J\subset I$ (possibly with different choices of $C$ and $R$ for each $J$), then $f$ is given by a power series which converges on $I$. Deduce that such a bound cannot hold for the function $f(x)=\begin{cases} \exp(-1/x^2) & x\neq 0 \\ 0 & x=0\end{cases}$  on any interval $I$ containing $0$.
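Here is a concrete check of the first direction (a small sketch with a function of my choosing): $f(x)=1/(2-x)$ is analytic on $(-\infty,2)$, its $m^\text{th}$ derivative is $m!/(2-x)^{m+1}$ in closed form, and on $J=[0,1]$ the bound holds with $R=1$.

```python
import math

def deriv(m, x):
    # m-th derivative of f(x) = 1/(2 - x), known in closed form.
    return math.factorial(m) / (2.0 - x) ** (m + 1)

grid = [i / 1000.0 for i in range(1001)]   # a grid on J = [0, 1]
R = 1.0
for m in range(12):
    sup = max(abs(deriv(m, x)) for x in grid)
    # The sup is attained at x = 1 and equals m!, so m! * R^m suffices.
    assert sup <= math.factorial(m) * R ** m * (1 + 1e-12)
print("bound verified for m = 0..11")
```

For $\exp(-1/x^2)$, by contrast, no such bound can exist on any interval containing $0$, as the problem asks you to show.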

Now we’ve got everything set up, it remains to turn the handle with some familiar Baire Category machinery.

Problem 3. For $a, b \in \mathbb{Q}\cap [0,1]$ with $a<b$, and $R\in \mathbb{N}$, define

$E_{a,b,R}=\left\{f \in X: \hspace{0.1cm} \text{for all }m\ge 0, \sup_{[a,b]} |f^{(m)}| \le R\cdot m!R^m\right\}.$

Show that the sets $E_{a,b,R}$ are closed in $X$ with respect to the metric constructed earlier, and have empty interior.

Problem 4. Conclude that the set $E \subset X$ of functions which are given by a power series on some nontrivial subinterval $I\subset [0,1]$ is meagre in the sense of Baire.

# Asymptotic Density of Subsets.

Sometimes, we want to talk about the ‘size’ of a subset $S$ of $\mathbb{N}$ in a way that’s a bit different from a measure, or indeed cardinality. The following is an elementary but (to my mind) cute series of questions giving the basic properties of this construction, which should be accessible to undergraduates with a basic knowledge of analysis and probability.

Let’s start by considering three sets: the natural numbers $\mathbb{N}=\{0,1,2,\dots\}$, the even numbers $E=\{0,2,4,\dots\}$, and the square numbers $P=\{1,4,9,16,\dots\}$. We’d like to find some mathematical framework that corresponds to the intuition that ‘exactly half’ of $\mathbb{N}$ is in $E$, and that $P$ is, asymptotically, very sparse.

Things that don’t work. As any first year mathematician can tell you, the three sets $\mathbb{N}, E, P$ are in one-to-one correspondence, and so all contain exactly as many elements as each other. So cardinality doesn’t work! We can’t talk about a uniform distribution on $\mathbb{N}$ either (why?), so probability doesn’t immediately give us the framework we want.

Asymptotic (Upper) Density. Instead, we use constructions called asymptotic density, and asymptotic upper density.

Problem 1. Let us define, for any subset $S\subset \mathbb{N}$, the asymptotic upper density $\overline{\mu}(S)=\limsup_{N\rightarrow \infty} N^{-1}|S\cap \{1,2,\dots,N\}|$, and the lower density $\underline{\mu}(S)=\liminf_{N\rightarrow \infty} N^{-1}|S\cap \{1,2,\dots,N\}|$. Show that we can have $S\subset \mathbb{N}$ such that $0\le \underline{\mu}(S) < \overline{\mu}(S)\le 1$, and disjoint subsets $A, B\subset \mathbb{N}$ such that $\overline{\mu}(A\cup B) < \overline{\mu}(A) + \overline{\mu}(B)$.
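For a concrete example of the first phenomenon (my construction; there are many others): take $S$ to be the union of the blocks $[4^k, 2\cdot 4^k)$. The running density then oscillates between roughly $\frac{1}{3}$ and $\frac{2}{3}$, which the following sketch checks numerically.

```python
def in_S(n):
    # S = union over k of [4^k, 2*4^k); valid for n >= 1.
    k = 0
    while 4 ** (k + 1) <= n:
        k += 1
    return n < 2 * 4 ** k

count = 0
densities = {}
# Sample the running density just after a block ends (N = 2*4^7)
# and at the end of the long gap that follows (N = 4^8).
checkpoints = {2 * 4 ** 7, 4 ** 8}
for n in range(1, 4 ** 8 + 1):
    count += in_S(n)
    if n in checkpoints:
        densities[n] = count / n

print(densities)   # ~2/3 at N = 2*4^7, ~1/3 at N = 4^8
```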

So these notions aren’t quite the best we can do: they lack some of the nice properties we might ask for. We can do a bit better when we restrict to ‘good’ subsets of $\mathbb{N}$.

Problem 2. Let $\mathcal{F}=\{S\subset \mathbb{N}: \underline{\mu}(S)=\overline{\mu}(S)\}$ be the class of $S \subset \mathbb{N}$ where the upper and lower densities coincide, and for $S \in \mathcal{F}$, set $\mu(S)=\overline{\mu}(S)=\underline{\mu}(S)$. In other words, $\mu(S)$ is exactly the limit $\lim_{N\rightarrow \infty} N^{-1}|S\cap \{1,2,\dots,N\}|$, and $\mathcal{F}$ is the class of $S$ where this limit exists. Show that the three sets $\mathbb{N}, E, P$ above belong to $\mathcal{F}$, and that we have $\mu(\mathbb{N})=1, \mu(E)=\frac{1}{2}$ and $\mu(P)=0$.
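These densities can be checked empirically (a quick sketch; the indicator functions are the obvious ones, and there are exactly $\lfloor\sqrt{N}\rfloor$ squares up to $N$):

```python
import math

def density(indicator, N):
    # Running density N^{-1} |S ∩ {1, ..., N}|.
    return sum(1 for n in range(1, N + 1) if indicator(n)) / N

N = 1_000_000
d_even = density(lambda n: n % 2 == 0, N)
d_square = density(lambda n: math.isqrt(n) ** 2 == n, N)
print(d_even)    # exactly 0.5, since N is even
print(d_square)  # floor(sqrt(N)) / N = 0.001
```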

So this at least gives us a way of formalising the intuition. The next question is to show that we at least mitigate, if not completely avoid, the bad behaviour outlined in Problem 1.

Problem 3. Show that, if $A, B \in \mathcal{F}$ are disjoint, then $A \cup B \in \mathcal{F}$, and that $\mu(A\cup B)=\mu(A)+\mu(B)$. On the other hand, show that we can have $A_n \in \mathcal{F}$ disjoint, such that $A=\cup_n A_n$ is not in $\mathcal{F}$, and that even if we assume $A\in \mathcal{F}$, we can have a strict inequality $\mu(A) > \sum_n \mu(A_n)$.

We’ve now shown that we do a bit better by restricting to sets with a single asymptotic density, where the upper and lower bounds coincide.

The reason I spent a few minutes on a Sunday morning thinking about these objects was a question about constructing sets with a prescribed asymptotic density: given $\theta \in [0,1]$, can we find $A_\theta \in \mathcal{F}$ such that $\mu(A_\theta)=\theta$? The answer is yes, and it’s quite simple to give a (tedious!) constructive proof. However, since I’m a probabilist, I like the following minimal-effort (but non-constructive) solution, which does all $\theta \in [0,1]$ simultaneously.

Problem 4. Let $U_n, n\ge 1$ be a sequence of independent random variables, distributed uniformly on $[0,1]$. For $\theta \in [0,1]$, set $A_\theta =\{n\in \mathbb{N}: U_n\le \theta\}$. Show that $\mathbb{P}(A_\theta \in \mathcal{F}, \mu(A_\theta)=\theta, \text{ for all }\theta\in [0,1])=1$.

The fun part here is making the result simultaneous for all $\theta \in [0,1]$, since we have uncountably many events to deal with. It’s probably very straightforward if you have a background in probability, but might be a good exercise if you’re quite new to such ideas!
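A Monte Carlo sketch of the construction (my own illustration, with a fixed seed and parameters of my choosing), checking the empirical density of $A_\theta$ against $\theta$ for a few values:

```python
import random

random.seed(0)
N = 200_000
U = [random.random() for _ in range(N)]  # U_1, ..., U_N i.i.d. Uniform[0,1]

for theta in [0.1, 0.5, 0.9]:
    # Empirical density of A_theta = {n : U_n <= theta} up to N.
    dens = sum(1 for u in U if u <= theta) / N
    print(theta, round(dens, 3))
# Of course, this only probes finitely many n and finitely many theta;
# the point of the problem is the almost-sure statement for all theta at once.
```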

# Infinitely Many Blue-Eyed Logicians

This is an interesting generalisation of the Blue-Eyed Logicians puzzle. If you are not familiar with the original problem, you may wish to attempt that first!

The problem. Let $\kappa$ be an infinite cardinal. A population of $\kappa$ logicians, of potentially assorted eye colours, lives on an eye-land. Everyone can see everyone else’s eyes, but not their own.

In this society, it is an awful taboo to have blue eyes. Anyone who can deduce that they have blue eyes leaves the island on a midnight ferry, never to return – but because it’s such a taboo, there can be no discussion of eye colour.

Unbeknownst to them, every single one of the logicians has blue eyes. Oh dear….

But because there can be no discussion, no logician can ever deduce this, and they all live in perfect harmony.

One day, an oracle proclaims that ‘There is someone on the island with blue eyes’. Call this day 0.

Question: What happens next? We assume that the logicians are immortal, and allow time to run over $t\in\text{ON}$ – that is, over all ordinal-valued times. So after days $1, 2, \dots, n, \dots$, there’s a day $\omega$. This is followed by days $\omega+1, \omega+2, \dots, \omega+ n, \dots$, and then $\omega\cdot 2$, etc.

I (believe that I) have a solution, which will be posted here in due course.

DH

# Space-Filling Curves in Infinite Dimensions

“Space filling curves shouldn’t exist in infinite dimensions. That’s just rude” – Ben.

Bizarrely enough, they do! But only in a certain sense, as we will see below. Throughout, we equip $\mathbb{R}$ with the usual (Euclidean) topology.

Problem 1. Let $X$ be an infinite dimensional Banach space. Show that there is no continuous surjective map $f: \mathbb{R} \rightarrow (X, \| \|)$.

Problem 2. Let $1<p<\infty$, and consider the sequence space $l^p=L^p(\mathbb{N})$. Show that there exists a continuous surjection $f: \mathbb{R} \rightarrow (l^p, w)$, where $w$ denotes the weak topology.

Just in case that wasn’t counterintuitive enough for you, we could also show:

Problem 2′. Show that there exists a continuous surjection $f: \mathbb{R} \rightarrow (L^p(\mathbb{R}),w)$.

So there are space-filling curves on function spaces! This is very counter-intuitive, since $L^p(\mathbb{R})$ naïvely seems to be much bigger than $\mathbb{R}$.

Problem 3. What happens when we replace $l^p$ by $(l^\infty, w)$? What about $(l^\infty, w^*)$, or $(l^1, w)$?

Once you’ve done these, it’s instructive to see what general results you can get out of this.

# Redecorating

I have a wallpaper! I wanted something decorative, but also unobtrusive, to put as a wallpaper here – but preferably something that would fit with the mathematical theme of the site.

The wallpaper is bond percolation on a square grid, with $p=0.4$. This is below the critical point in two dimensions, $p_c=\frac{1}{2}$, so we don’t expect to see very big clusters emerging.

I simulated this in Mathematica – I’ll put up the code, so that you can experiment with different probabilities, at some point in the near future. I may also write a generally accessible blog post about percolation to explain what’s going on!

Edit: here’s the code!

```mathematica
n = 50;
Edges = {};
Vertexes = Flatten[Table[i + n*j, {i, 1, n}, {j, 1, n}], 1];
Do[
  U = RandomReal[];
  (* keep each horizontal bond with probability 1 - 0.6 = 0.4 *)
  If[U > 0.6, AppendTo[Edges, i + n*j <-> i + 1 + n*j]];
  V = RandomReal[];
  (* and likewise each vertical bond *)
  If[V > 0.6, AppendTo[Edges, i + n*j <-> i + n*(j + 1)]],
  {i, 1, n - 1}, {j, 1, n - 1}];
Result = Graph[Vertexes, Edges,
  VertexCoordinates -> Flatten[Table[{i, j}, {i, 1, n}, {j, 1, n}], 1]]
```

This should be run in Mathematica. Since not everyone has access to this, I will try to build an interactive version that I can put up here.
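In the meantime, here is a rough Python port of the snippet above (my own translation – the names and interface are mine, and it only samples the open bonds rather than drawing them):

```python
import random

def bond_percolation(n, p, seed=None):
    # Sample the open bonds of bond percolation on an n x n grid.
    # Vertices are (i, j); each horizontal and vertical bond is kept
    # independently with probability p.
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(n):
            if i + 1 < n and rng.random() < p:
                edges.append(((i, j), (i + 1, j)))  # horizontal bond
            if j + 1 < n and rng.random() < p:
                edges.append(((i, j), (i, j + 1)))  # vertical bond
    return edges

edges = bond_percolation(50, 0.4, seed=1)
print(len(edges), "of", 2 * 50 * 49, "candidate bonds are open")
```

Any plotting library can then draw the segments; the fraction of open bonds should hover around $p$.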

# Overkill? Me?

It’s an elementary fact of linear analysis that a normed space being finite dimensional is equivalent to its unit ball being compact under the norm topology. The forwards implication here is fairly clean, and boils down to repeated applications of Bolzano-Weierstrass. The converse is also elementary, but a bit ugly.

This is a nice application I found of the theory of weak topologies, which exports all of the work to a general theorem of point-set topology – that a continuous bijection from a compact space to a Hausdorff space is a homeomorphism – and gives a ‘cleaner’ proof of the converse implication.

Lemma. Let $X$ be a normed space whose unit ball $(B_X, \|\hspace{0.1cm} \|)$ is compact in the norm topology. Then $X$ is finite dimensional.

Proof. Equip the unit ball with the weak topology $w$, and consider the identity map  $\iota: (B_X, \|\hspace{0.1cm}\|)\rightarrow (B_X, w)$.

$\iota$ is clearly a bijection, and since the weak topology is coarser than the norm topology, $\iota$ is continuous. Moreover, since the weak topology is always Hausdorff, and we assumed that $(B_X, \|\hspace{0.1cm}\|)$ is compact, $\iota$ is a continuous bijection from a compact space into a Hausdorff space, and so is a homeomorphism. Hence

$(B_X, \|\hspace{0.1cm}\|)=(B_X,w).$

In particular, the half-ball $U= \frac{1}{2}B_X^o$ can be written as $V\cap B_X$, for some weakly open set $V\subset X$.

Suppose, for a contradiction, that $\text{dim } X=\infty.$ Since $V$ is a weakly open neighbourhood of the origin, it contains a finite co-dimensional subspace $Y$. Since $X$ is infinite-dimensional, and $Y$ has finite codimension, $Y$ cannot be the trivial subspace: $Y \neq 0$, and so contains a point $y\in Y$ of norm $\|y\|=\frac{3}{4}$. But this is absurd, since $V\cap B_X = \{x\in X: \|x\|<\frac{1}{2}\}$. This gives us a contradiction, and so we can conclude that $\text{dim } X<\infty$.