A Bit of Geometry

Claim: $H^1(\mathbb C^*,\mathbb Z)=\mathbb Z$.

Proof: Let\begin{align*}U_1&=\left\{z\in\mathbb C^*:\Re\left(z\right)>0\right\},\\U_2&=\left\{z\in\mathbb C^*:\Re\left(z\right)<0\right\},\\U_3&=\left\{z\in\mathbb C^*:\Im\left(z\right)>0\right\},\\U_4&=\left\{z\in\mathbb C^*:\Im\left(z\right)<0\right\}\text{, and}\end{align*}$\mathcal{U}=\left\{U_1,U_2,U_3,U_4\right\}$. Then $\mathcal{U}$ is a locally finite open cover of $\mathbb C^*$. Let $\mathcal{V}\subseteq\mathcal{U}$ be a nonempty subcollection. Then\begin{equation*}H^q\left(\bigcap_{V\in\mathcal{V}}V,\mathbb Z\right)=0\end{equation*}for all positive integers $q$ since $\bigcap_{V\in\mathcal{V}}V$ is empty or contractible. Therefore, by Leray's theorem, it is the case that $H^1\left(\mathcal{U},\mathbb Z\right)=H^1\left(\mathbb C^*,\mathbb Z\right)$. Let $I=\left\{1,2,3,4\right\}$ and observe that\begin{align*}C^0\left(\mathcal{U},\mathbb Z\right)&=\prod_{\substack{\alpha\in I\\\vphantom{\alpha}}}\mathbb Z\left(U_\alpha\right)=\mathbb Z^4,\\C^1\left(\mathcal{U},\mathbb Z\right)&=\prod_{\substack{\alpha,\beta\in I\\\alpha<\beta}}\mathbb Z\left(U_\alpha\cap U_\beta\right)=\mathbb Z^4\text{, and}\\C^2\left(\mathcal{U},\mathbb Z\right)&=\prod_{\substack{\alpha,\beta,\gamma\in I\\\alpha<\beta<\gamma}}\mathbb Z\left(U_\alpha\cap U_\beta\cap U_\gamma\right)=0.\end{align*}Therefore, the coboundary map $\delta_0:C^0\left(\mathcal{U},\mathbb Z\right)\to C^1\left(\mathcal{U},\mathbb Z\right)$, which is defined by $\sigma\mapsto\delta_0\sigma$ such that\begin{equation*}\left(\delta_0\sigma\right)_{\left(\alpha,\beta\right)}=\sigma_\beta|_{U_\alpha\cap U_\beta}-\sigma_\alpha|_{U_\alpha\cap U_\beta}\text{, where }\left(\alpha,\beta\right)\in I^2\text{ such that }\alpha<\beta,\end{equation*}is given by $\left(a,b,c,d\right)\mapsto\left(c-a,d-a,c-b,d-b\right)$, where $a,b,c,d\in\mathbb Z$ (the pairs $\left(1,2\right)$ and $\left(3,4\right)$ have empty intersections and contribute nothing, so the coordinates are indexed by $\left(1,3\right),\left(1,4\right),\left(2,3\right),\left(2,4\right)$). Observe that $d-b=\left(d-a\right)-\left(c-a\right)+\left(c-b\right)$. Therefore, $\text{im }\delta_0$ is a rank-$3$ direct summand of $\mathbb Z^4$ isomorphic to $\mathbb Z^3$, which implies that\begin{equation*}H^1\left(\mathbb C^*,\mathbb Z\right)=H^1\left(\mathcal{U},\mathbb Z\right)=\frac{\ker\delta_1}{\text{im }\delta_0}\cong\frac{\mathbb Z^4}{\mathbb Z^3}\cong\mathbb Z,\end{equation*}where $\delta_1:C^1\left(\mathcal{U},\mathbb Z\right)\to C^2\left(\mathcal{U},\mathbb Z\right)$ is the zero homomorphism. $\blacksquare$
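
If you want a quick sanity check of the cokernel computation, the Smith normal form of the matrix of $\delta_0$ does it. Below is a small sketch, assuming sympy is available; the rows are indexed by the pairs $\left(1,3\right),\left(1,4\right),\left(2,3\right),\left(2,4\right)$ and the columns by $a,b,c,d$.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Matrix of delta_0 with respect to the bases above.
D = Matrix([
    [-1,  0, 1, 0],   # c - a
    [-1,  0, 0, 1],   # d - a
    [ 0, -1, 1, 0],   # c - b
    [ 0, -1, 0, 1],   # d - b
])

print(smith_normal_form(D, domain=ZZ))   # expected: diag(1, 1, 1, 0)
# Elementary divisors 1, 1, 1, 0 mean im(delta_0) is a rank-3 direct summand
# of Z^4, so the cokernel Z^4 / im(delta_0) is isomorphic to Z.
```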

"Elementary Geometry"

This entry defines what my professors literally refer to as "elementary geometry". I am beginning to suspect that anything that is not cutting-edge mathematics is deemed elementary. It is either that or these concepts become elementary after constant perusal.

Let $D\subseteq\mathbb C^n$ be open. Then $f:D\to\mathbb C$ is holomorphic if it has a local power series representation at each point of $D$.

Holomorphic functions are uniquely determined by their behavior on open sets: if $f$ and $g$ are holomorphic on a domain $D$ (where a domain is a set that is open and connected) and agree on a nonempty open subset of $D$, then $f=g$ on $D$. This is because the largest open subset of $D$ on which $f=g$ is nonempty and also closed (relative to $D$) since the partial derivatives which determine the power series expansion are continuous. Therefore, since $D$ is connected, this set must be all of $D$.

Say that $f$ is holomorphic at $z\in\mathbb C^n$ if it is holomorphic on some neighborhood of $z$. Let $A_z$ be the collection of functions that are holomorphic at $z$. Then $A_z$ is an algebra over $\mathbb C$ in which addition and multiplication are defined pointwise such that if $f:U\to\mathbb C$ and $g:V\to\mathbb C$, then $f+g:U\cap V\to\mathbb C$ and $fg:U\cap V\to\mathbb C$.

Let $I_z$ be the ideal in $A_z$ consisting of the functions of $A_z$ that vanish on some neighborhood of $z$.

The algebra of germs of holomorphic functions at $z$ is defined to be the quotient algebra $O_z:=A_z/I_z$. Thus a germ of a holomorphic function is an element $f+I_z$ of $O_z$, where $f$ is holomorphic at $z$. Denote this germ by $[f]_z$. We identify functions which belong to the same germ due to the uniqueness property mentioned above.

Define the stalk space (espace étalé) of germs of holomorphic functions to be $S=\{(z,[f]_z):f$ is holomorphic at $z\in\mathbb C^n\}$ together with $\rho:S\to\mathbb C^n$ defined by $(z,[f]_z)\mapsto z$. Call $\rho^{-1}(z)$ the stalk at $z$ (this is a copy of $O_z$). Equivalently, $S$ is the disjoint union of these stalks.

For each open $U\subseteq\mathbb C^n$ and each $f$ holomorphic on $U$, define $V(f,U):=\{(z,[f]_z):z\in U\}$. Then the collection of all such sets $V(f,U)$ is a basis for a topology on $S$. Moreover, relative to this topology, $\rho$ is a local homeomorphism.

This topological space together with $\rho$ is called the sheaf of germs of holomorphic functions over the base space $\mathbb C^n$.

It turns out that $S$ is Hausdorff. In fact, $S$ is an analytic manifold with a few surprising properties.

Carathéodory's Theorem

I like this theorem because it allows us to construct measures from outer measures—complete measures, in fact.

Claim (Carathéodory's Theorem): If $\mu^*$ is an outer measure on a nonempty set $X$, then the collection $\mathcal M$ of $\mu^*$-measurable subsets of $X$ is a $\sigma$-algebra, and $\mu^*|_{\mathcal M}$ is a complete measure.

Proof: idk how 2 explain it.. i just feel it in me, u know what im saying? $\blacksquare$

I'm just kidding. The way people usually prove things does not constitute a valid proof. lol

Proof: Observe that if $E\in\mathcal M$, then $E^c\in\mathcal M$ since $\mu^*$-measurability is symmetric in $E$ and $E^c$.

Let $A,B\in\mathcal M$ and let $E\subseteq X$. Then$$\begin{align}\mu^*\left(E\right)&=\mu^*\left(E\cap A\right)+\mu^*\left(E\cap A^c\right)\\&=\mu^*\left(E\cap A\cap B\right)+\mu^*\left(E\cap A\cap B^c\right)+\mu^*\left(E\cap A^c\cap B\right)+\mu^*\left(E\cap A^c\cap B^c\right).\end{align}$$Since $A\cup B=(A\cap B)\cup(A\cap B^c)\cup(B\cap A^c)$, it is the case that$$\mu^*\left(E\cap\left(A\cup B\right)\right)\leq\mu^*\left(E\cap A\cap B\right)+\mu^*\left(E\cap A\cap B^c\right)+\mu^*\left(E\cap A^c\cap B\right),$$which, together with $E\cap A^c\cap B^c=E\cap\left(A\cup B\right)^c$, implies that $\mu^*(E)\geq\mu^*(E\cap(A\cup B))+\mu^*(E\cap(A\cup B)^c)$. The reverse inequality holds by subadditivity, so $A\cup B\in\mathcal M$. Therefore, $\mathcal M$ is an algebra.

Observe that if $A,B\in\mathcal M$ and $A\cap B=\varnothing$, then$$\mu^*\left(A\cup B\right)=\mu^*\left(\left(A\cup B\right)\cap A\right)+\mu^*\left(\left(A\cup B\right)\cap A^c\right)=\mu^*\left(A\right)+\mu^*\left(B\right).$$Therefore, $\mu^*$ is finitely additive on $\mathcal M$.

Let $\{A_j\}_1^\infty$ be a disjoint collection of elements of $\mathcal M$, let $B_n=\bigcup_1^n A_j$, let $B=\bigcup_1^\infty A_j$, and let $E\subseteq X$. Then$$\mu^*\left(E\cap B_n\right)=\mu^*\left(E\cap B_n\cap A_n\right)+\mu^*\left(E\cap B_n\cap A_n^c\right)=\mu^*\left(E\cap A_n\right)+\mu^*\left(E\cap B_{n-1}\right).$$By induction, it is the case that $\mu^*(E\cap B_n)=\sum_1^n\mu^*(E\cap A_j)$, which, since $B^c\subseteq B_n^c$, implies that$$\mu^*\left(E\right)=\mu^*\left(E\cap B_n\right)+\mu^*\left(E\cap B_n^c\right)\geq\sum_1^n\mu^*\left(E\cap A_j\right)+\mu^*\left(E\cap B^c\right).$$Letting $n$ tend to infinity and applying countable subadditivity yields that$$\mu^*\left(E\right)\geq\sum_1^\infty\mu^*\left(E\cap A_j\right)+\mu^*\left(E\cap B^c\right)\geq\mu^*\left(E\cap B\right)+\mu^*\left(E\cap B^c\right),$$and the reverse inequality holds by subadditivity, so $B\in\mathcal M$. Since $\mathcal M$ is an algebra that is closed under countable disjoint unions, it is a $\sigma$-algebra.

Observe that if $E=B$, then $\mu^*(B)=\sum_1^\infty\mu^*(A_j)$. Therefore, $\mu^*$ is countably additive on $\mathcal M$, which implies that $\mu^*|_{\mathcal M}$ is a measure.

Let $A,E\subseteq X$ such that $\mu^*(A)=0$. Then$$\mu^*(E)\leq\mu^*(E\cap A)+\mu^*(E\cap A^c)=\mu^*(E\cap A^c)\leq\mu^*(E),$$which implies that $A\in\mathcal M$. Therefore, every set of outer measure zero is measurable; in particular, every subset of a $\mu^*|_{\mathcal M}$-null set has outer measure zero and is measurable, so $\mu^*|_{\mathcal M}$ is complete. $\blacksquare$
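
Carathéodory's criterion is also easy to play with on a finite set. The following toy sketch (purely illustrative, not part of the proof) takes the outer measure $\mu^*\left(E\right)=\min\left\{\left|E\right|,2\right\}$ on a four-point set and enumerates the $\mu^*$-measurable sets by brute force; only the trivial $\sigma$-algebra survives, which shows that $\mathcal M$ can be quite small.

```python
from itertools import combinations

X = frozenset(range(4))

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def mu_star(E):
    # A toy outer measure: mu*(E) = min(|E|, 2). It is monotone, subadditive,
    # and assigns 0 to the empty set.
    return min(len(E), 2)

def measurable(A):
    # Caratheodory's criterion: mu*(E) = mu*(E & A) + mu*(E - A) for every test set E.
    return all(mu_star(E) == mu_star(E & A) + mu_star(E - A) for E in subsets(X))

print([set(A) for A in subsets(X) if measurable(A)])
# [set(), {0, 1, 2, 3}] -- only the trivial sigma-algebra is mu*-measurable here.
```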

Post Prelims: New Beginning, Pt. 1

Here's fresh, new math—stuff I've never posted about. It's a bit of measure theory—an indispensable generalization of the notion of "length", "area", "volume", etc., which will later on allow us to extend the Riemann integral to the Lebesgue integral (and thus take our calculus to the "next level", if I may). Real-world applications of these concepts are plentiful in quantum mechanics. In fact, it feels like all the math I do now has applications only in similar fields at around that level, which makes it somewhat difficult to motivate. But someone has to do it, right?

Claim. Let $\left\{X_j\right\}_{j=1}^\infty$ be a collection of sets, let $\mathcal E_j\subseteq\mathcal P\left(X_j\right)$, let $\mathcal M_j$ be the $\sigma$-algebra on $X_j$ generated by $\mathcal E_j$, and let $X_j\in\mathcal E_j$. Then $\bigotimes_{j=1}^\infty\mathcal M_j$ is generated by$$\mathcal F=\left\{\prod_{j=1}^\infty E_j:E_j\in\mathcal E_j\right\}.$$

Proof. Recall that $\bigotimes_{j=1}^\infty\mathcal M_j$ is generated by$$\mathcal G=\left\{\pi_j^{-1}\left(E_j\right):j\in\mathbb N\text{ and }E_j\in\mathcal E_j\right\},$$where $\pi_k:\prod_{j=1}^\infty X_j\to X_k$ is the canonical projection. An element of $\mathcal F$ has the form $\prod_{j=1}^\infty E_j=E_1\times E_2\times\cdots,$ where $E_j\in\mathcal E_j$. Since $\pi_k^{-1}\left(E_k\right)=X_1\times X_2\times\cdots\times E_k\times\cdots\in\mathcal G$, it is the case that$$\prod_{j=1}^\infty E_j=E_1\times E_2\times\cdots=\bigcap_{j=1}^\infty\pi_j^{-1}\left(E_j\right)\in\mathcal M\left(\mathcal G\right).$$Hence, $\mathcal M\left(\mathcal F\right)\subseteq\mathcal M\left(\mathcal G\right).$ Conversely, an element of $\mathcal G$ has the form $\pi_k^{-1}\left(E_k\right)=X_1\times X_2\times\cdots\times E_k\times\cdots$, where $E_k\in\mathcal E_k$. Since $X_j\in\mathcal E_j$ for every $j$, it is the case that $\pi_k^{-1}\left(E_k\right)\in\mathcal F$. Hence, $\mathcal M\left(\mathcal G\right)\subseteq\mathcal M\left(\mathcal F\right)$. $\blacksquare$

Note that it is crucial that $\left\{X_j\right\}_{j=1}^\infty$ is a countable collection and that $X_j\in\mathcal E_j$ for $j=1,2,\dots$.
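
The claim is about countably many factors, but the two-sided inclusion is easy to sanity-check by brute force on a finite toy example (under the same assumption that $X_j\in\mathcal E_j$). The sketch below uses a small helper, written only for this illustration, that generates a $\sigma$-algebra on a finite set by closing under complements and unions, and then checks that rectangles and preimages of generators generate the same $\sigma$-algebra on a product of two small sets.

```python
from itertools import combinations, product

def generated_sigma_algebra(X, generators):
    # Brute-force closure under complement and (finite, hence countable here) union.
    sigma = {frozenset(), frozenset(X)} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        current = list(sigma)
        for A in current:
            c = frozenset(X) - A
            if c not in sigma:
                sigma.add(c); changed = True
        for A, B in combinations(current, 2):
            u = A | B
            if u not in sigma:
                sigma.add(u); changed = True
    return sigma

# Two small factors with X_j in E_j, as the claim requires.
X1, E1 = {0, 1, 2}, [{0}, {0, 1, 2}]
X2, E2 = {0, 1},    [{0}, {0, 1}]
X = set(product(X1, X2))

rectangles = [{(a, b) for a in A for b in B} for A in E1 for B in E2]      # plays the role of F
preimages  = [{(a, b) for a in A for b in X2} for A in E1] + \
             [{(a, b) for a in X1 for b in B} for B in E2]                 # plays the role of G

sigma_F = generated_sigma_algebra(X, rectangles)
sigma_G = generated_sigma_algebra(X, preimages)
assert sigma_F == sigma_G
print("both collections generate the same sigma-algebra of", len(sigma_F), "sets")
```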

Later on, I will introduce the notion of a measure, define Borel $\sigma$-algebras, and do some computations on $\mathbb R$.

Thoughts and Some Abstract Algebra

This year has been a ridiculously laborious one for me: I enrolled in classes fairly beyond my comfort zone, and these seized the opportunity to savagely rip me to shreds. On top of that, I was cast into the shark-infested waters of tutoring, grading, and reciting: it is one thing to be a math Jedi and another to successfully convey the subject. I have perennially worked arduously in my life, but let me tell you that I have never worked this hard. I used to brag about studying for twelve to fourteen hours a day every once in a while; this is now my routine. My summer exclusively boiled down to preparing for my prelims, and I still feel nervous about them. The amount of knowledge that I have had to digest in the past two months has been absurd. Paradoxically, it looks like I often pour a torrent of strenuous effort into writing long and complicated blog entries, but this is actually my way of relaxing, of taking a break, of quickly letting off steam. I may sound like a masochist, but I have simply assimilated one of Muhammad Ali's quotes: "I hated every minute of training, but I said, 'Don't quit. Suffer now and live the rest of your life as a champion.'"

Enough of my petty grievances. I will now tackle some abstract algebra questions that I found interesting from my book:

Claim: If $G$ is a finite group and $x\in G$ of odd order $n$, then there is a positive integer $k$ such that $x^{2k}=x$.

Proof: Since $n$ is odd, $n+1=2k$ for some positive integer $k$. Therefore, $x^{2k}=x^{n+1}=x^nx=ex=x$. $\blacksquare$

Claim: If $G$ is a finite group, $H\leq G$, $N\trianglelefteq G$, and $\gcd\left(\left|H\right|,\left[G:N\right]\right)=1$, then $H\leq N$.

Proof: Let $\varphi:G\to G/N$ be such that $g\mapsto gN$. Then $\varphi$ is a homomorphism, which implies that $\varphi\left(H\right)\leq G/N$, which in turn implies that $\left|\varphi\left(H\right)\right|\mid\left[G:N\right]$. Moreover, $\left.\varphi\right|_H$ is a homomorphism, which implies that $H/\ker\left.\varphi\right|_H\cong\varphi\left(H\right)$, which in turn implies that $\left|\varphi\left(H\right)\right|\mid\left|H\right|$. Therefore, $\left|\varphi\left(H\right)\right|=1$, which implies that $\varphi\left(H\right)=\left\{N\right\}$, which in turn implies that $H\subseteq N$. $\blacksquare$

Theorem (Sylow's): If $G$ is a group of order $p^\alpha m$, where $p$ is prime, $\alpha$ is a positive integer, and $p\not\mid m$, then
  1. there is a subgroup of order $p^\alpha$;
  2. if $P$ is a subgroup of order $p^\alpha$ and $Q$ is a $p$-subgroup, then there is a $g\in G$ such that $Q\leq gPg^{-1}$; and
  3. $n_p\equiv1\pmod p$ and $n_p\mid m$, where $n_p$ is the number of subgroups of order $p^\alpha$.
Claim: A group of order $30=2\cdot3\cdot5$ has a nontrivial proper normal subgroup.

Proof: Observe that $n_2\in\left\{1,3,5,15\right\}$, $n_3\in\left\{1,10\right\}$, and $n_5\in\left\{1,6\right\}$. If $n_2=1$, $n_3=1$, or $n_5=1$, then the corresponding Sylow subgroup is normal (it is its only conjugate), and we are done. Therefore, suppose that $n_2\geq3$, $n_3=10$, and $n_5=6$. Since distinct subgroups of prime order intersect trivially, there are at least $3$ elements of order $2$, $20$ elements of order $3$, and $24$ elements of order $5$, which, together with the identity, gives at least $48$ elements, a contradiction since the group has only $30$ elements. Therefore, the group has a nontrivial proper normal subgroup. $\blacksquare$

The following is a list of representatives, one from each isomorphism class of abelian groups of order $216=2^3\cdot3^3$.

$$\begin{align*}\mathbb Z_{8}\times\mathbb Z_{27}&\cong\mathbb Z_{216}\\\mathbb Z_{8}\times\mathbb Z_{9}\times\mathbb Z_{3}&\cong\mathbb Z_{72}\times\mathbb Z_{3}\\\mathbb Z_{8}\times\mathbb Z_{3}\times\mathbb Z_{3}\times\mathbb Z_{3}&\cong\mathbb Z_{24}\times\mathbb Z_{3}\times\mathbb Z_{3}\\\mathbb Z_{4}\times\mathbb Z_{2}\times\mathbb Z_{27}&\cong\mathbb Z_{108}\times\mathbb Z_{2}\\\mathbb Z_{4}\times\mathbb Z_{2}\times\mathbb Z_{9}\times\mathbb Z_{3}&\cong\mathbb Z_{36}\times\mathbb Z_{6}\\\mathbb Z_{4}\times\mathbb Z_{2}\times\mathbb Z_{3}\times\mathbb Z_{3}\times\mathbb Z_{3}&\cong\mathbb Z_{12}\times\mathbb Z_{6}\times\mathbb Z_{3}\\\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{27}&\cong\mathbb Z_{54}\times\mathbb Z_{2}\times\mathbb Z_{2}\\\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{9}\times\mathbb Z_{3}&\cong\mathbb Z_{18}\times\mathbb Z_{6}\times\mathbb Z_{2}\\\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{2}\times\mathbb Z_{3}\times\mathbb Z_{3}\times\mathbb Z_{3}&\cong\mathbb Z_{6}\times\mathbb Z_{6}\times\mathbb Z_{6}\end{align*}$$
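
These nine classes come from pairing a partition of $3$ (the exponent of $2$) with a partition of $3$ (the exponent of $3$). Here is a short script, just a sketch of the bookkeeping with the factorization of $216$ hard-coded, that regenerates the elementary-divisor decompositions:

```python
from itertools import product

def partitions(n, max_part=None):
    # All partitions of n as non-increasing tuples, e.g. 3 -> (3,), (2, 1), (1, 1, 1).
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

factorization = {2: 3, 3: 3}   # 216 = 2^3 * 3^3

# One abelian group per choice of a partition of each exponent:
# for p^a and partition (a_1, ..., a_r), the factor is Z_{p^{a_1}} x ... x Z_{p^{a_r}}.
groups = []
for choice in product(*[[(p, lam) for lam in partitions(a)] for p, a in factorization.items()]):
    factors = []
    for p, lam in choice:
        factors += [p ** e for e in lam]
    groups.append(sorted(factors, reverse=True))

for g in groups:
    print(" x ".join(f"Z_{m}" for m in g))
print(len(groups), "isomorphism classes")   # 9
```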

Claim: If $p$ is an odd prime and $G$ is a group of order $2p$, then $G$ is a semi-direct product of the form $\mathbb Z_p\rtimes\mathbb Z_2$.

Proof: By Cauchy's theorem, there exist $h\in G$ of order $p$ and $k\in G$ of order $2$. Let $H=\langle h\rangle$ and $K=\langle k\rangle$. Then $\left[G:H\right]=2$, which implies that $H$ is normal; moreover, $H\cap K=\left\{e\right\}$ since $\gcd\left(p,2\right)=1$, and $G=HK$ since $\left|HK\right|=\left|H\right|\left|K\right|/\left|H\cap K\right|=2p=\left|G\right|$. Therefore, $G=H\rtimes K\cong\mathbb Z_p\rtimes\mathbb Z_2$. $\blacksquare$

Claim: If $H$ and $K$ are groups and $\varphi:K\to\text{Aut}\left(H\right)$ is a homomorphism, then the identity map between $H\rtimes_{\varphi}K$ and $H\times K$ is an isomorphism if and only if $\varphi$ is the constant homomorphism.

Proof: Let $\left(h_1,k_1\right),\left(h_2,k_2\right)\in H\rtimes_{\varphi}K$ and suppose that the identity map between $H\rtimes_{\varphi}K$ and $H\times K$ is an isomorphism. Then

$$\left(h_1\varphi\left(k_1\right)\left(h_2\right),k_1k_2\right)=\left(h_1,k_1\right)\left(h_2,k_2\right)=\left(h_1h_2,k_1k_2\right),\tag*{$\left(\star\right)$}$$

which implies that $\varphi\left(k_1\right)\left(h_2\right)=h_2$ for all $h_2\in H$ and $k_1\in K$, i.e., $\varphi\left(k\right)=\operatorname{id}_H$ for every $k\in K$; that is, $\varphi$ is the constant homomorphism.

Conversely, suppose that $\varphi$ is the constant homomorphism. Then $\left(\star\right)$ holds, which implies that the identity map between $H\rtimes_{\varphi}K$ and $H\times K$ is an isomorphism. $\blacksquare$

Claim: Boolean rings are commutative.

Proof: Let $x$ and $y$ be two elements of a Boolean ring. First note that $2x=\left(2x\right)^2=4x^2=4x$, so $2x=0$ and hence $-x=x$ for every element $x$. Then

$$x+y=\left(x+y\right)^2=x+xy+yx+y\implies0=xy+yx\implies xy=-yx=yx.\tag*{$\blacksquare$}$$
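
The canonical example of a Boolean ring is the power set of a set under symmetric difference (as addition) and intersection (as multiplication). Here is a quick brute-force check of idempotence and commutativity on a three-element set, purely as an illustration:

```python
from itertools import chain, combinations

X = {0, 1, 2}
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))]

add = lambda a, b: a ^ b      # symmetric difference plays the role of +
mul = lambda a, b: a & b      # intersection plays the role of *

assert all(mul(a, a) == a for a in subsets)                           # Boolean: a^2 = a
assert all(mul(a, b) == mul(b, a) for a in subsets for b in subsets)  # commutative
print("idempotent and commutative on all", len(subsets), "elements")
```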

A Bit of Pre-Forensic Science

I've been sharing fairly serious math, so I'll tone it down in this blog entry by going over an intriguing problem my differential equations students had to tackle on Friday.

The Problem

The FBI hired you as a forensics scientist. Not long after, at midnight, a body with a temperature of 31°C was found in the woods. One hour after that, the temperature of the body had dropped to 29°C. It was noted that the ambient temperature remained at 21°C.

If the temperature of the body is modeled by Newton's law of cooling and if the normal temperature of a live body is 36.7°C, then when was the person killed?

The Solution

Let $x\left(t\right)$ be the temperature (in °C) of the body as a function of time (in hours). Then

$$x'\left(t\right)=k\left(x\left(t\right)-21\right),$$

where $k$ is a constant of proportionality. Let's solve this differential equation:

$$\begin{align*}x'\left(t\right)=kx\left(t\right)-21k&\implies\left[e^{-kt}x\left(t\right)\right]'=-21ke^{-kt}\\&\implies e^{-kt}x\left(t\right)=21e^{-kt}+C\\&\implies x\left(t\right)=21+Ce^{kt}.\end{align*}$$

Let $t=0$ represent midnight. Then $31=x\left(0\right)=21+C$, which implies $C=10$. Moreover, $29=x\left(1\right)=21+10e^{k}$, which implies $e^{k}=4/5$. Therefore,

$$x\left(t\right)=21+10\left(\frac{4}{5}\right)^t.$$

Setting this equal to $36.7$ and solving for $t$ yields $t\approx-2.02$, which implies the person was killed at around 10:00 PM.
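
For the record, the arithmetic can be checked in a couple of lines; the numbers below simply reproduce the computation above.

```python
import math

C = 31 - 21                          # from x(0) = 31 with ambient temperature 21
ratio = (29 - 21) / C                # e^k, from x(1) = 29
t = math.log((36.7 - 21) / C) / math.log(ratio)
print(round(t, 2))                   # -2.02, i.e. roughly 10:00 PM
```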

Remarks
  • I skipped a gazillion steps.
  • In the real world:
    • the temperature of a dead body may not be modeled by Newton's law of cooling (what if the victim wore several layers of clothing?),
    • the ambient temperature may not be constant (think Texas), and
    • not everyone's normal body temperature is 36.7°C (what if the person ran a lot before dying?).
  • Nevertheless, educated guesses like these prove useful more often than not.

Metrizable + Lindelöf = Second Countable

Studying for my prelims has been a revealing experience: half a year ago, I proved this claim in an extraordinarily naïve way, thinking that it was clever. Although the proof below is my most recent attempt, which is significantly more concise, history may repeat itself. This discipline has humbled me tremendously.

Claim: If $X$ is a metrizable, Lindelöf, topological space, then $X$ is second countable.

Proof: Let $d$ be a metric that induces the topology of $X$, and let $\mathcal B_n=\left\{B_d\left(x,1/n\right):x\in X\right\}$, where $n\in\mathbb N$. Then $\mathcal B_n$ is an open cover of $X$ and thus has a countable sub-cover $\mathcal V_n$. Let $\mathcal B=\bigcup_{n=1}^\infty\mathcal V_n$, let $x\in B_d\left(y_1,1/m_1\right)\cap B_d\left(y_2,1/m_2\right)$ for some $B_d\left(y_1,1/m_1\right),B_d\left(y_2,1/m_2\right)\in\mathcal B$, and let $m_3\in\mathbb N$ such that$$\frac1{m_3}\leqslant\frac12\min\left\{\frac1{m_1}-d\left(x,y_1\right),\frac1{m_2}-d\left(x,y_2\right)\right\}.$$ Then there is a $B_d\left(y_3,1/m_3\right)\in\mathcal V_{m_3}\subseteq\mathcal B$ containing $x$. Let $z\in B_d\left(y_3,1/m_3\right)$. Then$$\begin{align*}d\left(z,y_1\right)&\leqslant d\left(z,y_3\right)+d\left(y_3,y_1\right)\\&\leqslant d\left(z,y_3\right)+d\left(y_3,x\right)+d\left(x,y_1\right)\\&<\frac1{m_3}+\frac1{m_3}+d\left(x,y_1\right)=\frac2{m_3}+d\left(x,y_1\right)\\&\leqslant\frac1{m_1}-d\left(x,y_1\right)+d\left(x,y_1\right)=\frac1{m_1},\end{align*}$$which implies that $z\in B_d\left(y_1,1/m_1\right)$, which in turn implies that $B_d\left(y_3,1/m_3\right)\subseteq B_d\left(y_1,1/m_1\right)$. A similar argument shows that $B_d\left(y_3,1/m_3\right)\subseteq B_d\left(y_2,1/m_2\right)$, so $\mathcal B$ satisfies the basis criterion. Moreover, if $U$ is open and $x\in U$, then $B_d\left(x,2/n\right)\subseteq U$ for some $n\in\mathbb N$, and $x$ lies in some $B_d\left(y,1/n\right)\in\mathcal V_n$, which is contained in $B_d\left(x,2/n\right)\subseteq U$ by the triangle inequality. Therefore, $\mathcal B$ is a countable basis for the topology of $X$, and $X$ is second countable. $\blacksquare$

Heartbeat

I have not written in a while, but this blog is still alive, so I will quickly prove two of my favorite (among many) results of topology. These stem straight from my preparation for the coming preliminary examinations.

School all of a sudden snatched most of my time.

Let $X$ be a topological space.

Claim: $X$ is Hausdorff if and only if $\Delta:=\left\{\left(x,x\right):x\in X\right\}$ is closed.

Proof: Suppose that $X$ is Hausdorff, and let $\left(x_1,x_2\right)\in X\times X\setminus\Delta$. Then $x_1\neq x_2$, which implies that there exist disjoint neighborhoods $U$ and $V$ of $x_1$ and $x_2$, respectively, which implies that $U\times V\subseteq X\times X\setminus\Delta$, which implies that $X\times X\setminus\Delta$ is open, which implies that $\Delta$ is closed.

Suppose that $\Delta$ is closed, and let $x_1,x_2\in X$ such that $x_1\neq x_2$. Then $\left(x_1,x_2\right)\in X\times X\setminus\Delta$, which is open, which implies that there exists a neighborhood $U\times V$ of $\left(x_1,x_2\right)$ contained in $X\times X\setminus\Delta$, which implies that there exist disjoint neighborhoods $U$ and $V$ of $x_1$ and $x_2$, respectively, which implies that $X$ is Hausdorff. $\blacksquare$

Let $X$ and $Y$ be topological spaces, let $Y$ be Hausdorff, let $A\subseteq X$, let $f,g:\overline A\to Y$ be continuous, and let $f\left(x\right)=g\left(x\right)$ for every $x\in A$.

Claim: $f\left(x\right)=g\left(x\right)$ for every $x\in\overline A$.

Proof: Define $h:\overline A\to Y\times Y$ by $x\mapsto\left(f\left(x\right),g\left(x\right)\right)$. Then $h$ is continuous and $h\left(A\right)\subseteq\Delta$, which implies that $h(\overline A)\subseteq\overline{h\left(A\right)}\subseteq\overline\Delta$. Since $Y$ is Hausdorff, $\Delta$ is closed, which implies that $h(\overline A)\subseteq\Delta$, which implies that $f\left(x\right)=g\left(x\right)$ for every $x\in\overline A$. $\blacksquare$

I like these results because the first characterizes Hausdorff spaces in simple terms, and the second shows that continuous maps into a Hausdorff space that agree on a set also agree, without any shenanigans, at its limit points.

Remark: It is implicit in the first claim that the topology on $X\times X$ is the product one, which is generated by the set of products of open subsets of $X$. Let $\left(x_1,x_2\right)\in X\times X$, and let $N$ be a neighborhood of $\left(x_1,x_2\right)$. Then
$$N=\bigcup_{\alpha\in I}U_\alpha\times V_\alpha,$$
where $U_\alpha$ and $V_\alpha$ are open, and $I$ is some index set. Therefore, there exists a $\beta\in I$ such that $\left(x_1,x_2\right)\in U_\beta\times V_\beta$.

Simple Chaotic System

Define a function $d:\left\{0,1\right\}^\mathbb{N}\times\left\{0,1\right\}^\mathbb{N}\to\left[0,\infty\right)$ as follows:

If $x=\left(x_1,x_2,\dots\right)$ and $y=\left(y_1,y_2,\dots\right)$, then$$d\left(x,y\right)=\frac{1}{2^n}\text{, where }n=\min\left\{i\in\mathbb{N}:x_i\neq y_i\right\},$$and $d\left(x,y\right)=0$ if and only if $x=y$.

Claim: $d$ is a metric on $\left\{0,1\right\}^\mathbb{N}$.
Proof: Let $x=\left(x_1,x_2,\dots\right)$, $y=\left(y_1,y_2,\dots\right)$, and $z=\left(z_1,z_2,\dots\right)$ be elements of $\left\{0,1\right\}^\mathbb{N}$.
It follows by definition that $d\left(x,y\right)=0$ if and only if $x=y$. On the other hand, suppose that $x\neq y$. Then there exists an $n\in\mathbb{N}$ such that $d\left(x,y\right)=1/2^n>0$. Therefore, $d\left(x,y\right)\geqslant0$ for all $x,y\in\left\{0,1\right\}^\mathbb{N}$.
Suppose that $x=y$. Then $d\left(x,y\right)=0=d\left(y,x\right)$. On the other hand, suppose that $x\neq y$. Then there exists an $n\in\mathbb{N}$ such that $d\left(x,y\right)=1/2^n$. Since $\min\left\{i\in\mathbb{N}:x_i\neq y_i\right\}=n=\min\left\{i\in\mathbb{N}:y_i\neq x_i\right\}$, it follows that $d\left(x,y\right)=d\left(y,x\right)$. Therefore, $d\left(x,y\right)=d\left(y,x\right)$ for all $x,y\in\left\{0,1\right\}^\mathbb{N}$.
Suppose that $x=z$. Then $d\left(x,z\right)=0\leqslant d\left(x,y\right)+d\left(y,z\right)$ since $d$ maps to nonnegative values. On the other hand, suppose that $x\neq z$. Then there exists an $n\in\mathbb{N}$ such that $d\left(x,z\right)=1/2^n$. Suppose that $x=y$. Then $d\left(x,z\right)=d\left(y,z\right)\leqslant d\left(x,y\right)+d\left(y,z\right)$. On the other hand, suppose that $x\neq y$. Then there exists an $m\in\mathbb{N}$ such that $d\left(x,y\right)=1/2^m$. Suppose that $m\leqslant n$. Then $d\left(x,z\right)=1/2^n\leqslant1/2^m=d\left(x,y\right)\leqslant d\left(x,y\right)+d\left(y,z\right)$. On the other hand, suppose that $m>n$. Then $\min\left\{i\in\mathbb{N}:y_i\neq z_i\right\}=n$, which implies that $d\left(x,z\right)=d\left(y,z\right)\leqslant d\left(x,y\right)+d\left(y,z\right)$. Therefore, $d\left(x,z\right)\leqslant d\left(x,y\right)+d\left(y,z\right)$ for all $x,y,z\in\left\{0,1\right\}^\mathbb{N}$.
Hence, $d$ is a metric on $\left\{0,1\right\}^\mathbb{N}$. $\square$

Claim: The topology induced by $d$ is the same as the product topology on $\left\{0,1\right\}^\mathbb{N}$.
Proof: Let $x=\left(x_1,x_2,\dots\right)\in\left\{0,1\right\}^\mathbb{N}$, let $\varepsilon>0$, and let $B=B_d\left(x,\varepsilon\right)$. Then $x\in B$. Let $N$ be the smallest positive integer such that $1/2^N<\varepsilon$, and let $B'=\left\{x_1\right\}\times\left\{x_2\right\}\times\cdots\times\left\{x_{N-1}\right\}\times\left\{0,1\right\}\times\left\{0,1\right\}\times\cdots$. Then $x\in B'$. Let $y\in B'$. Then $d\left(x,y\right)\leqslant1/2^N<\varepsilon$, which implies that $y\in B$, which in turn implies that $B'\subseteq B$.
Conversely, let $N$ be a positive integer greater than $1$, and let $B'=\left\{x_1\right\}\times\left\{x_2\right\}\times\cdots\times\left\{x_{N-1}\right\}\times\left\{0,1\right\}\times\left\{0,1\right\}\times\cdots$. Then $x\in B'$. Let $\varepsilon=1/2^{N-1}$, and let $B=B_d\left(x,\varepsilon\right)$. Then $x\in B$. Let $y\in B$. Then $d\left(x,y\right)<\varepsilon=1/2^{N-1}$, which implies that $d\left(x,y\right)\leqslant1/2^N$, so $x_i=y_i$ for all positive integers $i<N$, which implies that $y\in B'$, which in turn implies that $B\subseteq B'$.
Therefore, since each $B$ is a basis element of the topology induced by $d$, and since each $B'$ is a basis element of the product topology on $\left\{0,1\right\}^\mathbb{N}$, it follows from Lemma $13.3$ of Munkres that the topologies generated by their respective bases are the same. $\square$

Claim: $\left(\left\{0,1\right\}^\mathbb{N},d\right)$ is complete.
Proof: Let $\left\{x_i\right\}_{i=1}^\infty$ be a Cauchy sequence of elements $x_i=\left(x_{i,1},x_{i,2},\cdots\right)\in\left\{0,1\right\}^\mathbb{N}$ and let $\varepsilon>0$. Then there exists an $N\in\mathbb{N}$ such that for all integers $m,n\geqslant N$, it is the case that $d\left(x_m,x_n\right)\leqslant1/2^k<\min\left\{1/2,\varepsilon\right\}$, where $k$ is the smallest positive integer for which this inequality holds. This implies that $x_{m,j}=x_{n,j}$ for all positive integers $j<k$ and all integers $m,n\geqslant N$. Therefore, $\left\{x_{i,j}\right\}_{i=1}^\infty$ converges because it is eventually constant. Let $\lim_{i\to\infty}x_{i,j}=y_j$. Since $k$ approaches infinity as $\varepsilon$ approaches zero, it is the case that for any $j\in\mathbb{N}$, one can find an $\varepsilon>0$ such that $j<k$, which implies that $y_j$ exists for all $j\in\mathbb{N}$. Therefore, $y=\left(y_1,y_2,\cdots\right)\in\left\{0,1\right\}^\mathbb{N}$.
It suffices to show that $\left\{x_i\right\}_{i=1}^\infty$ converges to $y$: Let $\varepsilon>0$. If $\varepsilon>1/2$, then $d\left(x_i,y\right)\leqslant1/2<\varepsilon$ for all $i\in\mathbb{N}$. Therefore, suppose that $0<\varepsilon\leqslant1/2$ and let $k$ be the smallest positive integer such that $1/2^k<\varepsilon$. Then for all positive integers $j<k$, there exists an $N_j\in\mathbb{N}$ such that $\left\{x_{i,j}\right\}_{i=N_j}^\infty$ is constant—with this constant value being equal to $y_j$. Let $N=\max\left\{N_1,N_2,\dots,N_{k-1}\right\}$. Then $d\left(x_i,y\right)\leqslant1/2^k<\varepsilon$ for all $i\geqslant N$, which implies that $\left\{x_i\right\}_{i=1}^\infty$ converges to $y$. Hence, $\left(\left\{0,1\right\}^\mathbb{N},d\right)$ is complete. $\square$

The shift map is the function $\sigma:\left\{0,1\right\}^\mathbb{N}\to\left\{0,1\right\}^\mathbb{N}$ defined by $\sigma\left(\left(x_1,x_2,x_3,\dots\right)\right)=\left(x_2,x_3,x_4,\dots\right)$.

Claim: The shift map is uniformly continuous.
Proof: Let $\varepsilon>0$, let $k$ be the smallest positive integer such that $1/2^k<\varepsilon$, let $\delta=1/2^{k}$, and let $x,y\in\left\{0,1\right\}^\mathbb{N}$ such that $d\left(x,y\right)<\delta$. This implies that $d\left(x,y\right)\leqslant1/2^{k+1}$, which in turn implies that $x_i=y_i$ for all positive integers $i\leqslant k$. It follows that $\sigma\left(x\right)_i=\sigma\left(y\right)_i$ for all positive integers $i<k$, which implies that $d\left(\sigma\left(x\right),\sigma\left(y\right)\right)\leqslant1/2^k<\varepsilon$. Therefore, the shift map is uniformly continuous. $\square$

Claim: If $\sigma^n\left(x\right)=x$ for some $n\in\mathbb{N}$, then $\sigma^{kn}\left(x\right)=x$ for all $k\in\mathbb{N}$.
Proof: Let $k=1$. Then $\sigma^{kn}\left(x\right)=\sigma^n\left(x\right)=x$. Assume that $\sigma^{kn}\left(x\right)=x$ for some $k\in\mathbb{N}$. Then $\sigma^{\left(k+1\right)n}\left(x\right)=\sigma^{kn}\left(\sigma^n\left(x\right)\right)=\sigma^{kn}\left(x\right)=x$. Therefore, by the principle of mathematical induction, it is the case that $\sigma^{kn}\left(x\right)=x$ for all $k\in\mathbb{N}$. $\square$

Claim: A point $x\in\left\{0,1\right\}^\mathbb{N}$ has period $n\in\mathbb{N}$ if and only if $x$ is a repeating sequence of length $n$.
Proof: Assume that $x$ is a repeating sequence of length $n\in\mathbb{N}$. Then $x=\left(x_1,x_2,\dots,x_n,x_1,x_2,\dots,x_n,\dots\right)$. Observe that $\sigma^n\left(x\right)=\left(x_1,x_2,\dots,x_n,x_1,x_2,\dots,x_n,\dots\right)=x$. Therefore, $x$ has period $n$.
Conversely, assume that $x=\left(x_1,x_2,\dots\right)$ has period $n\in\mathbb{N}$. Then $\sigma^n\left(x\right)=x$, which implies that $x_i=x_{n+i}$ for all positive integers $i\leqslant n$. By the previous claim, it is the case that $x_i=x_{kn+i}$ for all $k\in\mathbb{N}$. Therefore, $x$ has the following form:$$\begin{align*}x&=\left(x_1,x_2,\dots,x_n,x_{n+1},x_{n+2},\dots,x_{n+n},\dots,x_{kn+1},x_{kn+2},\dots,x_{kn+n},\dots\right)\\&=\left(x_1,x_2,\dots,x_n,x_1,x_2,\dots,x_n,\dots\right),\end{align*}$$which implies that $x$ is a repeating sequence of length $n$. $\square$

Claim: The set $S$ of periodic points in $\left\{0,1\right\}^\mathbb{N}$ is countable.
Proof: Let $S_n$ be the set of elements of $S$ having period $n\in\mathbb{N}$. Then $\left|S_n\right|=2^n$ and$$S=\bigcup_{n=1}^\infty S_n,$$which is a countable union of finite sets, which implies that $S$ is countable. $\square$

Claim: The set $S$ of periodic points in $\left\{0,1\right\}^\mathbb{N}$ is dense in $\left\{0,1\right\}^\mathbb{N}$.
Proof: Clearly, $\overline{S}\subseteq\left\{0,1\right\}^\mathbb{N}$ since the closure is taken in $\left\{0,1\right\}^\mathbb{N}$. On the other hand, let $x\in\left\{0,1\right\}^\mathbb{N}$. If $x$ is periodic, then $x\in S\subseteq\overline{S}$. Otherwise, let $U=U_1\times U_2\times\cdots\times U_n\times\left\{0,1\right\}\times\left\{0,1\right\}\times\cdots$ be a basic neighborhood of $x$ and define $y=\left(y_1,y_2,\dots,y_n,y_1,y_2,\dots,y_n,\dots\right)$, where $y_i=x_i\in U_i$ for all positive integers $i\leqslant n$. Then $y\neq x$ (because $y$ is periodic and $x$ is not) and $y\in U\cap S$, which implies that $x$ is a limit point of $S$. Therefore, $x\in\overline{S}$, which implies that $\left\{0,1\right\}^\mathbb{N}\subseteq\overline{S}$. Hence, $\overline{S}=\left\{0,1\right\}^\mathbb{N}$, which implies that $S$ is dense in $\left\{0,1\right\}^\mathbb{N}$. $\square$

Claim: The shift map has sensitive dependence on initial conditions.
Proof: Let $\beta=1/2$, let $x\in\left\{0,1\right\}^\mathbb{N}$, let $\varepsilon>0$, and let $n$ be the smallest positive integer such that $1/2^n<\varepsilon$. Choose $y\in\left\{0,1\right\}^\mathbb{N}$ that agrees with $x$ in its first $n$ coordinates and differs from $x$ in coordinate $n+1$. Then $d\left(x,y\right)=1/2^{n+1}<\varepsilon$, so $y\in B_d\left(x,\varepsilon\right)$, and $d\left(\sigma^n\left(x\right),\sigma^n\left(y\right)\right)=1/2=\beta$. Therefore, $\sigma$ has sensitive dependence on initial conditions. $\square$
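
Here is a small numerical illustration of the last claim, using finite truncations of sequences as stand-ins for elements of $\left\{0,1\right\}^\mathbb{N}$ (a sketch, not a substitute for the proof): two points that agree in their first $n$ coordinates are $1/2$ apart after $n$ applications of the shift.

```python
def d(x, y):
    # The metric above, on finite truncations: 1/2^i at the first disagreement.
    for i, (a, b) in enumerate(zip(x, y), start=1):
        if a != b:
            return 1 / 2 ** i
    return 0.0

shift = lambda x: x[1:]

n = 10
x = tuple(i % 2 for i in range(50))          # 0, 1, 0, 1, ...
y = x[:n] + (1 - x[n],) + x[n + 1:]          # flip coordinate n + 1

print(d(x, y))                               # 1 / 2**(n + 1): the points start out very close
for _ in range(n):
    x, y = shift(x), shift(y)
print(d(x, y))                               # 0.5: after n shifts they have separated
```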

An Elementary Graph-Theoretic Result

Let us begin this blog entry with a handful of fairly intuitive and fun definitions.

A graph $G$ is an ordered pair $\left(V,E\right)$ of vertices $V$ and edges $E$, where $V$ is a set, and $E$ is a set of $2$-element subsets of $V$.

For example, if $V=\left\{1,2,3,4\right\}$, and $E=\left\{\left\{1,3\right\},\left\{2,3\right\},\left\{2,4\right\}\right\}$, then $G=\left(V,E\right)$ is the following graph:

Figure 1

A subgraph of a graph $G=\left(V,E\right)$ is a graph $G'=\left(V',E'\right)$ where $V'\subseteq V$ and $E'\subseteq E$.

For example, the following graph:

Figure 2

is a subgraph of the graph in Figure 1.

A walk is a sequence $v_1,e_1,v_2,e_2,\dots,v_{n-1},e_{n-1},v_n$ of vertices $v_i$ and edges $e_i$ such that $e_i=\left\{v_i,v_{i+1}\right\}$ for each $i$. It can be abbreviated as $v_1,v_2,\dots,v_n$, leaving out the edges.

A closed walk is a walk where $v_1=v_n$.

For example, the graph in Figure 1 has the following walk:
$$1,\left\{1,3\right\},3,\left\{3,2\right\},2,\left\{2,4\right\},4,\tag{$1$}$$
which is not closed since $1\neq4$.

A graph is connected if all of its vertices can be connected by one walk.

For example, the graph in Figure 1 is connected because $\left(1\right)$ is a walk that connects all of its vertices. On the other hand, the following graph:

Figure 3

is not connected because, say, vertex $2$ cannot be connected to, say, vertex $5$.

The degree of a vertex is the number of edges connected to it.

For example, in Figure 3, vertex $1$ has degree $1$ because edge $\left\{1,3\right\}$ is the only edge connected to it. Similarly, vertex $2$ has degree $2$ because edges $\left\{2,3\right\}$ and $\left\{2,4\right\}$ are the only edges connected to it.

An Euler cycle in a graph $G$ is a closed walk passing through all of the edges of $G$ exactly once.

For example, if we add edge $\left\{1,4\right\}$ to the graph $G$ in Figure 1, obtaining the following graph:

Figure 4

then 
$$1,\left\{1,3\right\},3,\left\{2,3\right\},2,\left\{2,4\right\},4,\left\{1,4\right\},1\tag{$2$}$$
is an Euler cycle because it passes through all of the edges of $G$ exactly once. Note that the graph in Figure 3 cannot have an Euler cycle because it is not connected. Moreover, the following graph:

Figure 5

has a walk
$$4,\left\{3,4\right\},3,\left\{1,3\right\},1,\left\{1,2\right\},2,\left\{2,3\right\},3,\left\{3,4\right\},4,\tag{$3$}$$
which is closed but not an Euler cycle because it passes through edge $\left\{3,4\right\}$ more than once.

With all of that said, we have arrived at the purpose of this blog entry: proving the following theorem:

Theorem 1. If $G$ is a connected graph, then $G$ has an Euler cycle if and only if its vertices have even degree.

However, before we prove it, we must prove the following lemma:

Lemma 1. If a graph has an Euler cycle, then its vertices have even degree.

Proof. Suppose, for contradiction, that the graph has an Euler cycle $c$ and a vertex $v$ of odd degree $d$. If $c$ neither starts nor ends at $v$, then each passage of $c$ through $v$ enters and leaves along two incident edges, and since $c$ uses every edge of the graph exactly once, every edge incident to $v$ belongs to exactly one passage; hence $d$ is twice the number of passages, which is even, a contradiction. If $c$ starts at $v$, then it also ends at $v$, so the first and last edges of $c$, together with the two edges of each intermediate passage through $v$, again account for every edge incident to $v$ exactly once, and $d$ is again even, a contradiction. Hence, the graph has no vertex of odd degree. $\blacksquare$

We are now ready to tackle the theorem:

Proof (Theorem 1). Suppose that $G$ has an Euler cycle. Then, by Lemma 1, the vertices of $G$ have even degree. Conversely, suppose that the vertices of $G$ have even degree, pick any vertex $v_0$, and create a closed walk $v_0,v_1,\dots,v_n=v_0$ that passes through all of the edges connected to $v_0$ and that passes through all of its edges exactly once (convince yourself that this is possible: keep walking along unused edges; since every degree is even, the walk can only get stuck at $v_0$, and only once every edge at $v_0$ has been used). We proceed by induction on the number of edges of $G$. If the walk passes through all of the edges of $G$, then we are done. Otherwise, consider the subgraph of $G$ obtained by removing the vertex $v_0$ and all of the edges of $G$ that are in the walk. The resulting subgraph decomposes into connected subgraphs $G_1,\dots,G_k$ whose vertices still have even degree (the walk uses an even number of edges at each vertex), and, since $G$ is connected, each $G_i$ contains a vertex of the walk. By the induction hypothesis, each $G_i$ has an Euler cycle, call it $v_{i,1},v_{i,2},\dots,v_{i,n_i}=v_{i,1}$. Finally,
$$\begin{align}&v_0,v_1,\dots,v_{1,1},v_{1,2},\dots,v_{1,n_1}=v_{1,1},\\&v_{1,n_1+1},\dots,v_{2,1},v_{2,2},\dots,v_{2,n_2}=v_{2,1},\\&v_{2,n_2+1},\dots,v_{k,1},v_{k,2},\dots,v_{k,n_k}=v_{k,1},\\&v_{k,n_k+1},\dots,v_n=v_0.\end{align}$$
is an Euler cycle (we assume WLOG that $v_{i,1}$ comes before $v_{i+1,1}$ in the walk). $\blacksquare$

This answers a question that Leonhard Euler asked in 1736.
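
The inductive construction in the proof is, in essence, Hierholzer's algorithm. Here is a rough Python sketch of that idea for simple graphs given as lists of $2$-element edge sets, assuming the hypotheses of Theorem 1 (connected, every vertex of even degree); it is illustrative rather than a polished implementation.

```python
from collections import defaultdict

def euler_cycle(edges):
    """Hierholzer-style construction of an Euler cycle, assuming the graph is
    connected and every vertex has even degree (Theorem 1)."""
    adj = defaultdict(set)
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(adj))
    stack, cycle = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()       # use the edge {u, v} ...
            adj[v].discard(u)      # ... exactly once
            stack.append(v)
        else:
            cycle.append(stack.pop())
    return cycle

# The graph of Figure 4: vertices 1-4 with edges {1,3}, {2,3}, {2,4}, {1,4}.
print(euler_cycle([{1, 3}, {2, 3}, {2, 4}, {1, 4}]))   # e.g. [1, 3, 2, 4, 1]
```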

Smooth Manifold Theory Final Interesting Question

I just took my smooth manifold theory final and would like to go over one of its most interesting questions.

Let $M$ and $N$ be smooth manifolds and let $f:M\to N$ be a covering projection.

Claim: If $N$ is orientable, then $M$ is orientable.

Proof: Let $p\in N$. Since $N$ is orientable, it is the case that $p$ is in the domain of an oriented local frame $\left(\sigma_i\right)$ defined on an open subset $U$ of $N$. Since $f$ is a covering projection, it is the case that $f^{-1}\left(U\right)$ is an open subset of $M$ for which there exists a countable, open, pairwise disjoint cover $\left\{V_\alpha\right\}_{\alpha\in A}$ such that each $\left.f\right|_{V_\alpha}:V_\alpha\to U$ is a diffeomorphism. Therefore, each $\left(\left.f\right|_{V_\alpha}\right)_*:T_qV_\alpha\to T_{\left.f\right|_{V_\alpha}\left(q\right)}U$ is an isomorphism for all $q\in V_\alpha$, which implies that $\left(\left(\left.f\right|_{V_\alpha}\right)^*\sigma_i\right)$ is a local frame defined on $V_\alpha$. Such local frames cover $M$, and on any overlap (possibly coming from different choices of $p$ and $U$) the two pulled-back frames push forward under $f$ to frames that are both positively oriented with respect to the orientation of $N$, so their transition matrix has positive determinant. Hence, these frames are consistently oriented and determine an orientation of $M$. $\blacksquare$

Recall that $\left(\sigma_i\right)$ is an ordered $n$-tuple, taking $N$ to be $n$-dimensional, of sections of the tangent bundle $\pi_N:TN\to N$ of $N$, such that $\left(\left.\sigma_i\right|_p\right)$ is a basis for the fiber $\pi^{-1}_N\left(p\right)=T_pN$ for all $p\in U$.

Although I am not yet certain that this is a valid proof, I am pretty confident about it.

The converse of the claim does not hold because an orientation of $M$ need not descend to $N$. For example, let $M=\mathbb S^2$, let $N=\mathbb S^2/\sim$, where $\sim$ is the usual antipodal equivalence, and let $f:M\to N$ be the quotient map. Then $f$ is a two-sheeted covering projection, $M$ is orientable, and $N$ is not orientable.

I enjoyed this class way too much.

Tensor Products: Basics

Let $V_1,\dots,V_k$ and $W$ be vector spaces. A map $F:V_1\times\cdots\times V_k\to W$ is multilinear if $$F\left(v_1,\dots,av_i+a'v_i',\dots,v_k\right)=aF\left(v_1,\dots,v_i,\dots,v_k\right)+a'F\left(v_1,\dots,v_i',\dots,v_k\right).$$ A multilinear function of two variables is bilinear. Let $V$ be a finite-dimensional real vector space and let $k$ be a natural number. A covariant $k$-tensor on $V$ is a multilinear function $$T:\underbrace{V\times\cdots\times V}_\text{$k$ copies}\to\mathbb R.$$ The number $k$ is called the rank of $T$. The set of all covariant $k$-tensors on $V$, denoted $T^k\left(V\right)$, is a vector space under the operations of pointwise addition and scalar multiplication. Let $V$ be a finite-dimensional real vector space, let $S\in T^k\left(V\right)$, let $T\in T^l\left(V\right)$, and define a map $$S\otimes T:\underbrace{V\times\cdots\times V}_\text{$k+l$ copies}\to\mathbb R$$ by $$S\otimes T\left(X_1,\dots,X_{k+l}\right)=S\left(X_1,\dots,X_k\right)T\left(X_{k+1},\dots,X_{k+l}\right).$$ Then $S\otimes T$ is a covariant $\left(k+l\right)$-tensor called the tensor product of $S$ and $T$.

Proposition. Let $V$ be a real vector space of dimension $n$, let $\left(E_i\right)$ be any basis for $V$, and let $\left(\varepsilon^i\right)$ be the dual basis. The set of all $k$-tensors of the form $\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_k}$ for $1\leqslant i_1,\dots,i_k\leqslant n$ is a basis for $T^k\left(V\right)$, which therefore has dimension $n^k$.

Proof. Let $\mathcal B:=\left\{\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_k}:1\leqslant i_1,\dots,i_k\leqslant n\right\}$, let $T\in T^k\left(V\right)$, and, for any $k$-tuple $\left(i_1,\dots,i_k\right)$ of integers $1\leqslant i_j\leqslant n$, let $$T_{i_1\dots i_k}:=T\left(E_{i_1},\dots,E_{i_k}\right).$$ Then $$\begin{align}T_{i_1\dots i_k}\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_k}\left(E_{j_1},\dots,E_{j_k}\right)&=T_{i_1\dots i_k}\varepsilon^{i_1}\left(E_{j_1}\right)\cdots\varepsilon^{i_k}\left(E_{j_k}\right)\\&=T_{i_1\dots i_k}\delta_{j_1}^{i_1}\cdots\delta_{j_k}^{i_k}\\&=T_{j_1\dots j_k}\\&=T\left(E_{j_1},\dots,E_{j_k}\right).\end{align}$$ Since both sides are multilinear and agree on all tuples of basis vectors, $T=T_{i_1\dots i_k}\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_k}$, so $\mathcal B$ spans $T^k\left(V\right)$. Now suppose $$T_{i_1\dots i_k}\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_k}=0.$$ Applying this to any $\left(E_{j_1},\dots,E_{j_k}\right)$ yields that $T_{j_1\dots j_k}=0$, so $\mathcal B$ is linearly independent. Hence, $\mathcal B$ is a basis, and $\dim T^k\left(V\right)=n^k$. $\blacksquare$

Note: Einstein's summation convention is used above.
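
In coordinates, the proposition says that a covariant $k$-tensor on $\mathbb R^n$ is determined by its component array $T_{i_1\dots i_k}=T\left(E_{i_1},\dots,E_{i_k}\right)$, and that the tensor product of components is an outer product. Here is a quick NumPy check of the defining identity $S\otimes T\left(X_1,\dots,X_{k+l}\right)=S\left(X_1,\dots,X_k\right)T\left(X_{k+1},\dots,X_{k+l}\right)$ for one random example; it is only an illustration of the definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Components of a covariant 2-tensor S and a covariant 1-tensor T on R^3
# with respect to the standard basis: S_ij = S(E_i, E_j), T_k = T(E_k).
S = rng.standard_normal((n, n))
T = rng.standard_normal(n)

def evaluate(components, vectors):
    # Multilinear evaluation: contract each index with one vector.
    out = components
    for v in vectors:
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

S_otimes_T = np.multiply.outer(S, T)    # components of the tensor product

X1, X2, X3 = rng.standard_normal((3, n))
lhs = evaluate(S_otimes_T, [X1, X2, X3])
rhs = evaluate(S, [X1, X2]) * evaluate(T, [X3])
print(np.isclose(lhs, rhs))             # True
```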

Here is a more abstract (and more insane) definition of tensor product:

Let $S$ be a set. The free vector space on $S$, denoted $\mathbb R\langle S\rangle$, is the set of all finite formal linear combinations of elements of $S$ with real coefficients. More precisely, a finite formal linear combination is a function $\mathcal F:S\to\mathbb R$ such that $\mathcal F\left(x\right)=0$ for all but finitely many $x\in S$. Under pointwise addition and scalar multiplication, $\mathbb R\langle S\rangle$ is a real vector space. Identifying each $x\in S$ with the function that takes the value $1$ on $x$ and $0$ elsewhere, any $\mathcal F\in\mathbb R\langle S\rangle$ can be written uniquely in the form $\mathcal F=\sum_{i=1}^ma_ix_i$, where $x_1,\dots,x_m$ are the elements of $S$ on which $\mathcal F$ is nonzero and $a_i=\mathcal F\left(x_i\right)$. Thus $S$ is a basis for $\mathbb R\langle S\rangle$.

Let $V$ and $W$ be finite-dimensional real vector spaces and let $\mathcal R$ be the subspace of $\mathbb R\langle V\times W\rangle$ spanned by elements of the form $$\begin{align}a\left(v,w\right)&-\left(av,w\right),\\a\left(v,w\right)&-\left(v,aw\right),\\\left(v,w\right)+\left(v',w\right)&-\left(v+v',w\right),\\\left(v,w\right)+\left(v,w'\right)&-\left(v,w+w'\right).\end{align}$$ The tensor product of $V$ and $W$, denoted $V\otimes W$, is $\mathbb R\langle V\times W\rangle/\mathcal R$ and the equivalence class of an element $\left(v,w\right)$ in $V\otimes W$, denoted $v\otimes w$, is the tensor product of $v$ and $w$.

Proposition (Characteristic Property of Tensor Products). Let $V$ and $W$ be finite-dimensional real vector spaces. If $A:V\times W\to X$ is a bilinear map into a vector space $X$, then there is a unique linear map $\tilde A:V\otimes W\to X$ such that $A=\tilde A\circ\pi$, i.e., the following diagram commutes:


where $\pi\left(v,w\right)=v\otimes w$.

Celebrating Square Root Day?

Today is square root day. What a weird day. To "celebrate," I will prove that consecutively applying the square root operator to any number greater than or equal to one will yield a number increasingly closer to one.

In other words, if $x_1\geqslant1$ is any number, and if $x_{n+1}=x_n^{1/2}$ for all integers $n$ greater than zero, then $\lim_{n\to\infty}x_n=1$.

Let $f:\left[1,\infty\right)\to\left[1,\infty\right)$ such that $x\mapsto x^{1/2}$, and let $x,y\in\left[1,\infty\right)$. Then
$$\left|x^{1/2}-y^{1/2}\right|=\frac{1}{\left|x^{1/2}+y^{1/2}\right|}\left|x-y\right|\leqslant\frac12\left|x-y\right|,$$
which implies that $f$ is Lipschitz with constant $1/2$, which in turn implies that $f$ is a contraction on $\left[1,\infty\right)$. Moreover, observe that $f\left(1\right)=1$, which implies that $1$ is a fixed point of $f$. Since $\left[1,\infty\right)$ equipped with the usual Euclidean metric is a complete metric space (it is a closed subset of $\mathbb R$), the Banach fixed-point theorem implies that $1$ is the unique fixed point of $f$ and that our sequence $\left\{x_n\right\}_{n=1}^\infty$ converges to $1$ for any $x_1\in\left[1,\infty\right)$.
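
And, in the spirit of the day, an empirical check (any starting value of at least $1$ will do):

```python
import math

x = 1729.0                   # any starting value >= 1
for _ in range(60):
    x = math.sqrt(x)
print(x)                     # prints a number extremely close to 1
```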

If pigs can fly, then I am Napoleon.

One of the easiest things that you can do is argue on a false premise; if a hypothesis is false, then its conclusion can be whatever you wish. This explains my title.

For a more concrete example, consider the following hypothesis: "$\infty$ is a number." It is clearly false, but assume that you think that it is true. Then $1+\infty=\infty$, which implies that $1=0$ (after subtracting $\infty$ from both sides). Therefore, we have shown that $1=0$, which in turn implies that any two numbers $x$ and $y$ are the same (use the following computation: $y=\left(y-x\right)\cdot1+x=\left(y-x\right)\cdot0+x=x$). In other words, if we allow $\infty$ to be a number, then only one number will exist, which is absurd.

Real debaters defend the veracity of their arguments with irrefutable truths of which, hopefully, the opposition is ignorant. Fake debaters futilely exchange unsubstantiated (at best) biases ad nauseam. When you argue, first make sure that your premises are actually true and not a whim.

Geometry Headaches: "Abstract Nonsense"

Geometry has recently been giving me a slight headache, so I will write about it like I usually do in this situation. I will first write about the abstract formulation of tangent vectors.

Let $M$ be a smooth manifold, and let $p\in M$. A derivation at $p$ is a linear map $v:C^\infty\left(M\right)\to\mathbb R$ such that for all $f,g\in C^\infty\left(M\right)$, it is the case that
$$v\left(fg\right)=f\left(p\right)vg+g\left(p\right)vf.$$
The tangent space to $M$ at $p$, denoted $T_pM$, is the set of all derivations at $p$. Strangely, an element of $T_pM$ is called a tangent vector at $p$.

Define the geometric tangent space to $\mathbb R^n$ at $a$, denoted $\mathbb R_a^n$, to be a sort of copy of $\mathbb R^n$ but with its origin at $a\in\mathbb R^n$. Formally,
$$\mathbb R^n_a=\left\{a\right\}\times\mathbb R^n.$$
This is so that $\mathbb R^n_a\cap\mathbb R^n_b=\varnothing$ whenever $a\neq b$. For an element $\left(a,v\right)\in\mathbb R^n_a$, write $v_a$ or $\left.v\right|_a$ instead, and call it a geometric tangent vector at $a$.

Let $f\in C^\infty\left(\mathbb R^n\right)$, and recall that the derivative of $f$ in the direction of some $v\in\mathbb R^n$ at some $a\in\mathbb R^n$, denoted $\left.D_v\right|_af$, satisfies
$$\left.D_v\right|_af=\left.\frac d{dt}\right|_{t=0}f\left(a+tv\right).$$
It goes without saying (from calculus) that $\left.D_v\right|_a$ is linear and is a derivation at $a$.
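
As a quick symbolic sanity check of the Leibniz rule for $\left.D_v\right|_a$, here is one concrete choice of $f$, $g$, $a$, and $v$ in $\mathbb R^2$, assuming sympy is available (the particular functions are arbitrary):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.sin(x) * y**2
g = sp.exp(x + y)

a = (1, 2)           # base point
v = (3, -1)          # direction

def D(h):
    # D_v|_a h = d/dt|_{t=0} h(a + t v)
    ht = h.subs({x: a[0] + t * v[0], y: a[1] + t * v[1]})
    return sp.diff(ht, t).subs(t, 0)

lhs = D(f * g)
rhs = f.subs({x: a[0], y: a[1]}) * D(g) + g.subs({x: a[0], y: a[1]}) * D(f)
print(sp.simplify(lhs - rhs))   # 0, as the product rule predicts
```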

Surprisingly, $\mathbb R^n_a$ and $T_a\mathbb R^n$ are isomorphic (with the map $v_a\mapsto\left.D_v\right|_a$).

Let $M$ and $N$ be smooth manifolds, and let $F:M\to N$ be a smooth map. For each $p\in M$, define a map
$$dF_p:T_pM\to T_{F\left(p\right)}N$$
and call it the differential of $F$ at $p$. Let $v\in T_pM$, let $f\in C^\infty\left(N\right)$, and define
$$dF_p\left(v\right)\left(f\right)=v\left(f\circ F\right).$$
We are now almost ready to talk about differentiating maps between manifolds.