Differential Geometry Part 1

To review for my upcoming final in differential geometry, I will go over my last two midterms.

1. Parametrize the curve $y^2+y=x^2$ using as the parameter $t$ the slope of the line from the origin to a point on the curve, i.e. $y=tx$.

The first thing we should do is substitute $tx$ for $y$ in the first equation to obtain $$ \left(t^2-1\right)x^2+tx=0. $$ Factoring out $x$, we find that $$ x=-\frac{t}{t^2-1}\quad\text{or}\quad x=0. $$ To find $y$ in terms of $t$, we substitute this $x$ into the second equation to obtain $$ y=-\frac{t^2}{t^2-1}\quad\text{or}\quad y=0. $$ Therefore, the parametric equation of this curve is $$ r\left(t\right)=\begin{cases} -\frac{t}{t^2-1}\partial_x-\frac{t^2}{t^2-1}\partial_y & \text{ if } t\in\mathbb{R}\setminus\{-1,1\}, \\ 0\partial_x+0\partial_y & \text{ if } t\in\{-1,1\}. \end{cases} $$ 2. The evolute of the parabola $y=x^2/2$ is $y=1+\frac32x^{2/3}$. How many normals can be drawn to this parabola from $\left(1,-1\right)$? Explain.
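Since I like double-checking my midterm algebra with a computer, here is a quick sympy sketch (my own addition, not part of the exam) verifying that the nontrivial branch of the parametrization satisfies both the curve and the line:

```python
import sympy as sp

t = sp.symbols('t')

# The branch of the parametrization valid for t != ±1.
x = -t / (t**2 - 1)
y = -t**2 / (t**2 - 1)

# The point (x, y) should satisfy the curve y^2 + y = x^2 ...
print(sp.simplify(y**2 + y - x**2))  # 0
# ... and lie on the line y = t x.
print(sp.simplify(y - t*x))          # 0
```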

Substituting the point's $x$-coordinate into the evolute's equation yields $$ y\left(1\right)=\frac52>-1. $$ In other words, the point lies below the evolute.

Some time ago, we concluded that from a point below the evolute or on one of its cusps, exactly one normal can be drawn; from a point above the evolute, exactly three; and from a point elsewhere on the evolute, exactly two.

Therefore, only one normal can be drawn.

3. Consider the curve $r\left(t\right)=e^t\left(\sin t-\cos t\right)\partial_x+e^t\left(\sin t+\cos t\right)\partial_y$ with $t\in\mathbb R$.

a. Find the speed.

The speed is defined as follows: $$ v=\left|r'\right|. $$ Therefore, we have that $$\begin{align} r'\left(t\right)&=2e^t\sin t\partial_x+2e^t\cos t\partial_y,\\ \left|r'\right|&=2e^t=v. \end{align}$$ b. Find a natural parameter.

The natural parameter is defined as follows: $$ s=\int v\,dt. $$ Therefore, we have that $$ s=2e^t+c. $$ c. Find involutes.

Involutes are defined as follows: $$ h=r-T\int v\,dt. $$ Therefore, we have that $$\begin{align} T&=\frac{r'}{v}=\sin t\partial_x+\cos t\partial_y,\\ h&=-\left[\left(e^t+c\right)\sin t+e^t\cos t\right]\partial_x+\left[e^t\sin t-\left(e^t+c\right)\cos t\right]\partial_y \end{align}$$ 4. Consider the curve $r\left(t\right)=t\partial_x+\ln\cos t\partial_y$ with $t\in\left(-\pi/2,\pi/2\right).$
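As a sanity check of parts (a) through (c), the following sympy sketch (my own addition) verifies the speed and the defining property of involutes, namely that the curve's tangents are normal to them, $h'\cdot T=0$:

```python
import sympy as sp

t, c = sp.symbols('t c', real=True)

r = sp.Matrix([sp.exp(t)*(sp.sin(t) - sp.cos(t)),
               sp.exp(t)*(sp.sin(t) + sp.cos(t))])
v = 2*sp.exp(t)                     # speed from part (a)
s = v.integrate(t) + c              # natural parameter 2e^t + c from part (b)
T = sp.simplify(r.diff(t) / v)      # unit tangent, (sin t, cos t)
h = sp.simplify(r - T*s)            # involutes from part (c)

# Sanity check of the speed: |r'|^2 = v^2.
print(sp.simplify(r.diff(t).dot(r.diff(t)) - v**2))   # 0
# Tangents of the curve are normal to its involutes: h' . T = 0.
print(sp.simplify(h.diff(t).dot(T)))                  # 0
```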

a. Find the speed.
Hint: $1+\tan^2t=\sec^2t$.

As before, we have that $$\begin{align} r'\left(t\right)&=\partial_x-\frac{\sin t}{\cos t}\partial_y=\partial_x-\tan t\,\partial_y,\\ v&=\sqrt{1+\tan^2t}=\sqrt{\sec^2t}=\sec t, \end{align}$$ where the last equality holds because $\sec t>0$ on $\left(-\pi/2,\pi/2\right)$. b. Find the unit tangent.

As before, we have that $$ T=\cos t\partial_x-\sin t\partial_y. $$ c. Find the right unit normal.

The right unit normal is defined as follows: $$ \tilde N=-T_2\partial_x+T_1\partial_y. $$ Therefore, we have that $$ \tilde N=\sin t\partial_x+\cos t\partial_y. $$ d. Find the signed curvature.

The signed curvature is defined as follows: $$ \tilde\varkappa=\frac{r'\times r''}{v^3} $$ Therefore, we have that $$\begin{align} r''\left(t\right)&=-\sec^2t\partial_y,\\ r'\times r''&=\begin{vmatrix} 1 & -\tan t\\ 0 & -\sec^2t \end{vmatrix}=-\sec^2t,\\ \tilde\varkappa&=-\cos t. \end{align}$$ e. Find the principal unit normal.

The principal unit normal is defined as follows: $$ N=\text{sign}\left(\tilde\varkappa\right)\tilde N. $$ Therefore, we have that $$ N=-\sin t\partial_x-\cos t\partial_y. $$ f. Find the evolute.

The evolute is defined as follows: $$ c=r+\frac{1}{\tilde\varkappa}\tilde N. $$ Therefore, we have that $$ c=\left(t-\tan t\right)\partial_x+\left(\ln\cos t-1\right)\partial_y. $$ 5. Fill in the blanks to form a true statement.
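Here is a sympy sketch (again my own addition) that reruns the pipeline of problem 3 and confirms both the signed curvature and the evolute:

```python
import sympy as sp

t = sp.symbols('t', real=True)

r = sp.Matrix([t, sp.log(sp.cos(t))])
rp, rpp = r.diff(t), r.diff(t, 2)

v = sp.sec(t)                                  # speed from part (a)
cross = rp[0]*rpp[1] - rp[1]*rpp[0]            # planar cross product r' x r''
kappa = cross / v**3                           # signed curvature
N_tilde = sp.Matrix([sp.sin(t), sp.cos(t)])    # right unit normal from part (c)
evolute = r + N_tilde / kappa

print(sp.simplify(kappa + sp.cos(t)))                     # 0, i.e. kappa = -cos t
print(sp.simplify(evolute[0] - (t - sp.tan(t))))          # 0
print(sp.simplify(evolute[1] - (sp.log(sp.cos(t)) - 1)))  # 0
```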

a. If the speed is $3$ and $T=\partial_x$, then the velocity is "$3\partial_x$."

b. Natural parameter is unique up to "shift and sign change."

c. A plane has "two" unit normals.

d. Inflection point is where a curve "has $\varkappa=0$."

e. If the curvature is $\varkappa=e^{2t}+1$ and the natural parameter is $s=e^t$, then the natural equation of the curve is "$\varkappa\left(s\right)=s^2+1$."

f. Lines and circles are the only plane curves that "have constant curvature."

g. The unit tangent to a curve is drawn below; draw and label $N$ and $\tilde N$.

h. Tangents to a plane curve are normal to "its involutes."

i. Wavefronts of a plane curve have cusps on "its evolute."

j. Involute of the evolute is "a wavefront of the plane curve."

6. Prove that $\ddot T\cdot T=-\varkappa^2$, where $T$ is the unit tangent and $\varkappa$ is the curvature of a curve.
Hint: Differentiate $\dot T\cdot T$.

Proof. $\dot T=\varkappa N\Longrightarrow\dot T\cdot T=\varkappa N\cdot T=0$. $$\begin{align} \frac{d}{dt}\left(\dot T\cdot T\right)&=\ddot T\cdot T+\dot T\cdot\dot T=\ddot T\cdot T+\left|\dot T\right|^2\\ &=\ddot T\cdot T+\left|\varkappa N\right|^2=\ddot T\cdot T+\varkappa^2=0\\ &\Longrightarrow\ddot T\cdot T=-\varkappa^2.\qquad\square \end{align}$$ 7. Let $r\left(t\right)$, $w\left(t\right)$ be two curves that are normal to the segment connecting them at each point, i.e. $r'\perp\left(w-r\right)$ and $w'\perp\left(w-r\right)$. Prove that they are equidistant, i.e. $\left|w-r\right|=\text{const}$.
Hint: Differentiate $\left|w-r\right|^2=\left(w-r\right)\cdot\left(w-r\right)$.

Proof. $$\begin{align} \frac d{dt}\left[\left|w-r\right|^2\right]&=\frac d{dt}\left[\left(w-r\right)\cdot\left(w-r\right)\right]\\ &=2\left(w'-r'\right)\cdot\left(w-r\right)\\ &=2\left[w'\cdot(w-r)-r'\cdot(w-r)\right]=0.\qquad\square \end{align}$$

Plane Curves

Consider the equation of a circle $$ x^2+y^2=R^2, $$ where $R$ is its radius. How do we parametrize it?

Recall that the graph of a function $f(t)$ can be parametrized in infinitely many ways. Moreover, its standard parametrization is $$ r(t)=t\partial_x+f(t)\partial_y. $$ However, the most common way to parametrize our circle is as follows: $$ r(t)=R(\sin t\partial_x+\cos t\partial_y), $$ where $t\in[0,2\pi).$

Note: this is not its standard parametrization.

If we want to find its perimeter, then we compute the following integral: $$\begin{align} P&=\int_{0}^{2\pi}|r'(t)|\,dt\\ &=\int_{0}^{2\pi}\sqrt{R^2\cos^2t+R^2\sin^2t}\,dt\\ &=2\pi R, \end{align}$$ which should be familiar to you if you have some basic knowledge in geometry.

Finally, we can use the indefinite form of the above integral to express our parametric equation in terms of its arc length $s$, as follows: $$ r(s)=R\left(\sin\frac{s}{R}\partial_x+\cos\frac{s}{R}\partial_y\right). $$ If we wish to compute the curvature $\varkappa$ of the circle, then we can use the above natural parametrization as follows: $$\begin{align} T&=\dot r\\ &=\cos\frac{s}{R}\partial_x-\sin\frac{s}{R}\partial_y,\\ \varkappa&=|\dot T|\\ &=\left|\frac{1}{R}\left(-\sin\frac{s}{R}\partial_x-\cos\frac{s}{R}\partial_y\right)\right|\\ &=\frac{1}{R}. \end{align}$$ This makes sense, since circles are the only plane curves with constant nonzero curvature.
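The two claims above, perimeter $2\pi R$ and curvature $1/R$, can also be checked numerically; here is a small NumPy sketch (the radius $R=2.5$ is an arbitrary choice of mine):

```python
import numpy as np

R = 2.5                                  # any radius
s = np.linspace(0.0, 2*np.pi*R, 100_001)

# Natural parametrization of the circle from above.
x = R*np.sin(s/R)
y = R*np.cos(s/R)

# Arc length of the discretized curve should approach 2*pi*R.
perimeter = np.sum(np.hypot(np.diff(x), np.diff(y)))

# Curvature |dT/ds| estimated by finite differences.
Tx, Ty = np.gradient(x, s), np.gradient(y, s)
kappa = np.hypot(np.gradient(Tx, s), np.gradient(Ty, s))

print(perimeter, 2*np.pi*R)   # nearly equal
print(kappa[1000], 1/R)       # nearly equal
```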

Fourier Transforms

Let $f\left(x\right)$ be piecewise smooth, and $$ \int_{-\infty}^\infty|f\left(x\right)|\,dx<\infty. $$ The Fourier transform and inverse Fourier transform of $f$ are respectively defined as follows: $$\begin{align} F\left(\omega\right)&\equiv\frac1{2\pi}\int_{-\infty}^\infty f\left(x\right)e^{i\omega x}\,dx,\\ f\left(x\right)&\equiv\int_{-\infty}^\infty F\left(\omega\right)e^{-i\omega x}\,d\omega. \end{align}$$ Together, they are called a Fourier transform pair.

Fourier transforms are frequently used to solve partial differential equations defined over infinite spatial domains. So, let us use this Fourier transform pair to solve the following heat equation $$\begin{align} u_t&=ku_{xx},\qquad -\infty<x<\infty,\\ u\left(x,0\right)&=f\left(x\right). \end{align}$$ To introduce some notation, let $\mathcal F$ and $\mathcal{F}^{-1}$ stand for the Fourier transform and inverse Fourier transform operators, respectively. It follows that $$\begin{align} \mathcal F\left(u_t\right)&=\mathcal F\left(ku_{xx}\right),\\ \frac1{2\pi}\int_{-\infty}^\infty \frac\partial{\partial t}u\left(x,t\right)e^{i\omega x}\,dx&=\frac k{2\pi}\int_{-\infty}^\infty \frac{\partial^2}{\partial x^2}u\left(x,t\right)e^{i\omega x}\,dx,\\ \frac\partial{\partial t}U\left(\omega,t\right)&=-k\omega^2U\left(\omega,t\right). \end{align}$$ This is an ordinary differential equation whose solution is $$ U\left(\omega,t\right)=c\left(\omega\right)\exp\left(-k\omega^2t\right). $$ Applying the Fourier transform to the initial condition yields $$\begin{align} \mathcal F\left[u\left(x,0\right)\right]&=\mathcal F\left[f\left(x\right)\right],\\ U(\omega,0)&=\frac1{2\pi}\int_{-\infty}^\infty f\left(x\right)e^{i\omega x}\,dx\\ &=c\left(\omega\right). \end{align}$$ To be able to apply the inverse Fourier transform to find $u$, we must know what the inverse Fourier transform of a Gaussian is, namely, $$ G(\omega)=\exp\left(-k\omega^2t\right). $$ Applying the definition yields $$\begin{align} \mathcal{F}^{-1}[G(\omega)]&=\mathcal{F}^{-1}\left[\exp\left(-k\omega^2t\right)\right],\\ g(x)&=\int_{-\infty}^\infty\exp\left(-k\omega^2t\right)\exp\left(-i\omega x\right)\,d\omega\\ &=\sqrt{\frac{\pi}{kt}}\exp\left(-\frac{x^2}{4kt}\right). \end{align}$$ We must also know the convolution theorem, which states that the inverse Fourier transform of a product is a (normalized) convolution: $$ \mathcal{F}^{-1}\left[F(\omega)G(\omega)\right]=\frac1{2\pi}\int_{-\infty}^\infty f(\bar x)g(x-\bar x)\,d\bar x. $$ Finally, the inverse Fourier transform of $$ U\left(\omega,t\right)=\exp\left(-k\omega^2t\right)c\left(\omega\right) $$ turns out to be the following: $$ u(x,t)=\frac1{2\sqrt{\pi kt}}\int_{-\infty}^\infty\exp\left(-\frac{\bar{x}^2}{4kt}\right)f(x-\bar x)\,d\bar x. $$
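The inverse transform of the Gaussian is the one step I took on faith, so here is a quick numerical quadrature check (the values of $k$, $t$, and $x$ are arbitrary choices of mine):

```python
import numpy as np

# Numerical check of the inverse transform of the Gaussian G(w) = exp(-k w^2 t).
k, t, x = 1.3, 0.7, 0.9

# The integrand is negligible well before |w| = 40, so a plain Riemann
# sum over a truncated interval is extremely accurate here.
w, dw = np.linspace(-40.0, 40.0, 400_001, retstep=True)
g_numeric = (np.exp(-k*w**2*t) * np.exp(-1j*w*x)).sum().real * dw

g_closed = np.sqrt(np.pi/(k*t)) * np.exp(-x**2/(4*k*t))
print(g_numeric, g_closed)   # agree to many digits
```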

Complex Form of Fourier Series

I will talk about Fourier transforms in the next entry. To do so, I will first introduce a way to convert our Fourier series currently defined in terms of sines and cosines into Fourier series defined in terms of complex exponentials.

First of all, recall that $$ f(x)\sim a_0+\sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}L+b_n\sin\frac{n\pi x}L\right),\tag{1} $$ where, by orthogonality principles, $$\begin{align} a_0&=\frac1{2L}\int_{-L}^Lf(x)\,dx,\\ a_n&=\frac1L\int_{-L}^Lf(x)\cos\frac{n\pi x}L\,dx,\\ b_n&=\frac1L\int_{-L}^Lf(x)\sin\frac{n\pi x}L\,dx. \end{align}$$ Now, recall Euler's formulas: $$ \cos\theta=\frac{e^{i\theta}+e^{-i\theta}}2,\qquad\text{and}\qquad\sin\theta=\frac{e^{i\theta}-e^{-i\theta}}{2i}. $$ It follows that we can rewrite equation $(1)$ as follows: $$ f(x)\sim a_0+\frac12\sum_{n=1}^{\infty}(a_n-ib_n)e^{\frac{n\pi ix}L}+\frac12\sum_{n=1}^{\infty}(a_n+ib_n)e^{-\frac{n\pi ix}L}. $$ Now, let's change the index of summation in the first term from $n$ to $-n$: $$ f(x)\sim a_0+\frac12\sum_{n=-1}^{-\infty}(a_{-n}-ib_{-n})e^{-\frac{n\pi ix}L}+\frac12\sum_{n=1}^{\infty}(a_n+ib_n)e^{-\frac{n\pi ix}L}. $$ It follows from the definition of $a_n$ and $b_n$ that $a_{-n}=a_n$ and $b_{-n}=-b_n$. Therefore, if we let $$\begin{align} c_0&=a_0,\\ c_n&=\frac{a_n+ib_n}2, \end{align}$$ we will have the following Fourier series: $$ f(x)\sim\sum_{n=-\infty}^{\infty}c_ne^{-\frac{in\pi x}{L}}, $$ where the coefficients are $$ c_n=\frac1{2L}\int_{-L}^Lf(x)e^{\frac{in\pi x}L}\,dx. $$
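Since the identity $c_n=(a_n+ib_n)/2$ is doing all the work here, the following NumPy sketch (the test function $f(x)=x^2+x$, the interval half-length $L=2$, and $n=3$ are arbitrary choices of mine) verifies it numerically:

```python
import numpy as np

# Numeric sanity check of c_n = (a_n + i b_n)/2 for a sample f on [-L, L].
L = 2.0
x, dx = np.linspace(-L, L, 200_001, retstep=True)
f = x**2 + x                 # arbitrary test function

n = 3
a_n = (f*np.cos(n*np.pi*x/L)).sum()*dx / L
b_n = (f*np.sin(n*np.pi*x/L)).sum()*dx / L
c_n = (f*np.exp(1j*n*np.pi*x/L)).sum()*dx / (2*L)

print(c_n, (a_n + 1j*b_n)/2)   # identical up to rounding
```

Because Euler's formula holds pointwise and the sums are linear, the two sides agree to machine precision, not merely to quadrature accuracy.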

Vibrating String with Fixed Ends

Until now, we have been working only with the heat equation. Today, we are going to solve the one-dimensional wave equation with homogeneous boundary conditions and no sources, namely, $$\begin{align} u_{tt}&=c^2u_{xx},\\ u(0,t)&=0,\quad u(L,t)=0,\\ u(x,0)&=f(x),\quad u_t(x,0)=g(x). \end{align}$$ As before, we want to use the method of separation of variables by setting $u(x,t)=\phi(x)h(t)$ and substituting above: $$ \phi h''=c^2\phi''h. $$ Since we are not interested in the trivial solution, $h\not\equiv0$, and the boundary conditions become $\phi(0)=0$ and $\phi(L)=0$.

Separating variables yields $$ \frac{\phi''}{\phi}=\frac{h''}{c^2h}\color{blue}{=-\lambda}. $$ Because functions of distinct independent variables can be equal only if both equal the same constant, we have introduced the equality in blue (the negative sign is purely for convenience later on).

We now have a system of two ordinary differential equations: $$\begin{align} \phi''&=-\lambda\phi,\tag{1}\\ h''&=-\lambda c^2h.\tag{2} \end{align}$$ Equation $(1)$ has three cases:

  1. $\lambda<0$: $$ \phi(x)=c_1\sinh\sqrt{-\lambda}x+c_2\cosh\sqrt{-\lambda}x. $$ Applying the boundary conditions to this yields $$\begin{align} \phi(0)&=c_2=0,\\ \phi(L)&=c_1\sinh\sqrt{-\lambda}L=0\Longrightarrow c_1=0. \end{align}$$ This is the trivial solution, so we drop it.
  2. $\lambda=0$: $$ \phi(x)=c_1x+c_2. $$ Applying the boundary conditions to this yields $$\begin{align} \phi(0)&=c_2=0,\\ \phi(L)&=c_1L=0\Longrightarrow c_1=0. \end{align}$$ This is the trivial solution, so we drop it.
  3. $\lambda>0$: $$ \phi(x)=c_1\sin\sqrt{\lambda}x+c_2\cos\sqrt{\lambda}x. $$ Applying the boundary conditions to this yields $$\begin{align} \phi(0)&=c_2=0,\\ \phi(L)&=c_1\sin\sqrt{\lambda}L=0\\ &\Longrightarrow\lambda_n=\left(\frac{n\pi}{L}\right)^2,\qquad n\in\mathbb{N},\\ \phi_n(x)&=C_n\sin\frac{n\pi x}{L}. \end{align}$$
Now that we know the value of $\lambda_n$, we can use it to solve equation $(2)$: $$ h_n(t)=A_n\sin \frac{n\pi ct}{L}+B_n\cos \frac{n\pi ct}{L}. $$ Therefore, by the principle of superposition, we find that the solution to this partial differential equation is $$ \color{blue}{u(x,t)=\sum_{n=1}^{\infty}\left(G_n\sin \frac{n\pi ct}{L}+F_n\cos \frac{n\pi ct}{L}\right)\sin\frac{n\pi x}{L}.} $$ The coefficients $F_n$ and $G_n$ can be found by applying the initial conditions: $$ u(x,0)=\sum_{n=1}^{\infty}F_n\sin\frac{n\pi x}{L}=f(x). $$ $$\begin{align} u_t(x,t)&=\sum_{n=1}^{\infty}\frac{n\pi c}{L}\left(G_n\cos \frac{n\pi ct}{L}-F_n\sin \frac{n\pi ct}{L}\right)\sin\frac{n\pi x}{L},\\ u_t(x,0)&=\sum_{n=1}^{\infty}G_n\frac{n\pi c}{L}\sin\frac{n\pi x}{L}=g(x). \end{align}$$ Finally, applying orthogonality principles to the above equations to find $F_n$ and $G_n$ yields $$\begin{align} \color{blue}{F_n}&\color{blue}{=\frac{2}{L}\int_{0}^{L}f(x)\sin\frac{n\pi x}{L}\,dx,}\\ \color{blue}{G_n}&\color{blue}{=\frac{2}{n\pi c}\int_{0}^{L}g(x)\sin\frac{n\pi x}{L}\,dx.} \end{align}$$ Letting $c=1$, $L=1$, $f(x)=\sin x$ and $g(x)=0$ gives us the following wave (the plotted series is truncated at a very low order):
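To convince myself the boxed formulas are right, here is a NumPy sketch (my own choices: $c=1$, $L=1$, $f(x)=x(1-x)$, $g=0$) checking that the partial sum reproduces the initial condition at $t=0$:

```python
import numpy as np

# Check the boxed solution at t = 0: with g = 0 every G_n vanishes, so
# the partial sum should reproduce f.  Sample choice: f(x) = x(1 - x).
L = 1.0
xq, dx = np.linspace(0.0, L, 100_001, retstep=True)   # quadrature grid
f = xq*(L - xq)

xe = np.linspace(0.0, L, 9)                           # evaluation points
u0 = np.zeros_like(xe)
for n in range(1, 60):
    Fn = (2/L)*(f*np.sin(n*np.pi*xq/L)).sum()*dx      # F_n by quadrature
    u0 += Fn*np.sin(n*np.pi*xe/L)                     # u(x, 0), since cos(0) = 1

print(np.max(np.abs(u0 - xe*(L - xe))))               # small truncation error
```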

Regular Sturm-Liouville Eigenvalue Problem

Definition

A regular Sturm-Liouville eigenvalue problem consists of the Sturm-Liouville differential equation $$ \frac{d}{dx}\left[p(x)\frac{d\phi}{dx}\right]+q(x)\phi+\lambda\sigma(x)\phi=0,\qquad a<x<b, $$ subject to the boundary conditions $$\begin{align} \beta_1\phi(a)+\beta_2\frac{d\phi}{dx}(a)&=0,\\ \beta_3\phi(b)+\beta_4\frac{d\phi}{dx}(b)&=0,\qquad\beta_i\in\mathbb{R}, \end{align}$$ where $p$, $q$, and $\sigma$ are real and continuous, and both $p>0$ and $\sigma>0$. Moreover, only Dirichlet, Neumann and Robin boundary conditions are considered.

Theorems

The following theorems about it have been derived:

  1. Each eigenvalue $\lambda_n$ is real.
  2. The eigenvalues can be ordered: $\lambda_1<\lambda_2<\cdots$.
  3. Each $\lambda_n$ has an eigenfunction $\phi_n(x)$, and $\phi_n$ has exactly $n-1$ zeros for $a<x<b$.
  4. The $\phi_n$ form a complete set, i.e., $$ f(x)\sim\sum_{n=1}^{\infty}a_n\phi_n, $$ where $f$ is piecewise smooth. Also, with properly chosen $a_n$, this converges to $[f(x^+)+f(x^-)]/2$ on $a<x<b$.
  5. Eigenfunctions belonging to distinct eigenvalues are orthogonal with respect to the weight $\sigma$: if $\lambda_n\neq\lambda_m$, then $$ \int_a^b\phi_n\phi_m\sigma\,dx=0. $$
The following is the Rayleigh quotient: $$ \lambda=\frac{-p\phi\,d\phi/dx|_a^b+\int_a^b[p(d\phi/dx)^2-q\phi^2]\,dx}{\int_a^b\phi^2\sigma\,dx}. $$ Most of these theorems can be proved using Green's formula $$ \int_a^b[uL(v)-vL(u)]\,dx=p\left(u\frac{dv}{dx}-v\frac{du}{dx}\right)\Bigg|_a^b, $$ where $$ L\equiv\frac d{dx}\left(p\frac d{dx}\right)+q. $$ The Rayleigh quotient proves that $\lambda\geq0$ if $-p\phi\,d\phi/dx|_a^b\geq0$ and $q\leq0$.
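As a small check of the Rayleigh quotient, the following sympy sketch (my own addition) evaluates it for the familiar Dirichlet problem $\phi''+\lambda\phi=0$ on $(0,L)$, i.e. $p=1$, $q=0$, $\sigma=1$, and recovers $\lambda_n=(n\pi/L)^2$:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# Dirichlet problem phi'' + lambda*phi = 0 on (0, L): p = 1, q = 0, sigma = 1.
phi = sp.sin(n*sp.pi*x/L)

# Boundary term -p*phi*phi' evaluated from a = 0 to b = L (vanishes here).
boundary = -(phi*phi.diff(x)).subs(x, L) + (phi*phi.diff(x)).subs(x, 0)
numerator = boundary + sp.integrate(phi.diff(x)**2, (x, 0, L))
denominator = sp.integrate(phi**2, (x, 0, L))

print(sp.simplify(numerator/denominator))   # equals (n*pi/L)**2
```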

Enlace Cube Puzzle

My brother gave me this other puzzle (see the previous one here):

Which is the longest line?

I assume this is a square box.

To make things simpler, without loss of generality, let this be a unit box, that is, let its side have length one.

Observe how the lines take circular-arc shapes on the faces of the box, so each line can be described as a sum of quarter circles whose radius is the box's side, i.e. $1$. Mathematically, $$ L=\frac\pi2\sum_{i=1}^{n}1=\frac{n\pi}2. $$ In other words, we don't even have to do any math at all; all we have to do is count the quarter circles, and whichever line has the most is the longest one.

To me, it looks like the blue line has $8$ quarter circles while the red one has $7$. So, the blue line is the longest line.

By the way, bro:

Spoiler:
My brother, so many things have happened this year that I would have to sit down with you for a whole day to give you a rough idea of how I have been doing. Right now the university is half killing me with final exams, but I plan to write to you when I finish them next week (this time for real!). :)

Three Vessels

My brother gave me the following puzzle:

He didn't state the problem fully, so I looked it up online here. However, I did not look at the answer; I'm not like that. In fact, I won't look at the answer even after I answer it.

There are three vessels of different color: green, red and blue, with sizes as shown.

The green vessel is put into the red one, and the red one into the blue one.

Each vessel is full of water.

The object is to determine which vessel contains the largest quantity of water. The thickness of the walls of the vessels can be ignored.

At first, I didn't understand the problem because I was looking at it from a fully mathematical perspective (it seemed trivial). In other words, I was overlooking water displacement (the negligible wall thickness led me to think in terms of water superposition, lol).

Moreover, the puzzle seems designed to challenge intuition. So, perhaps the intended answer is that the green vessel contains the largest quantity of water, but we'll see.

With these things in mind, let's recall that the equation for the volume of a cylinder is as follows: $$ V=\pi r^2h. $$ Therefore, the blue, red and green cylinders have the following volumes, respectively: $$\begin{align} V_b&=48\pi,\\ V_r&=36\pi,\\ V_g&=20\pi. \end{align}$$ However, let's take a look at the blue cylinder; its water is being displaced by the internal cylinder of diameter $6$, whose submerged volume is $27\pi$. Hence, the blue cylinder's water volume is $$ \hat{V_b}=V_b-27\pi=21\pi. $$ Similarly, the red cylinder's water volume is $$ \hat{V_r}=V_r-16\pi=20\pi. $$ Finally, the green cylinder has no internal cylinder, so its water volume is the same as its volume; $\hat{V_g}=V_g$.

Our final results are as follows: $$\begin{align} \hat{V_b}&=21\pi,\\ \hat{V_r}&=20\pi,\\ \hat{V_g}&=20\pi. \end{align}$$ So, the counterintuitive guess was wrong; the blue cylinder holds the largest amount of water.

Here is another puzzle.

A Recap of Everything

It has been a long time since I last posted an entry, and that is because we have been stuck on the subject of Fourier series for over two weeks.

We are still solving the heat equation and its steady-state counterparts defined on various kinds of regions, such as a rectangle, a disk, an annulus, and even a sphere defined in cylindrical coordinates.

Every single problem eventually boils down to an infinite series solution that has to be evaluated or approximated; the Fourier series that we are currently dealing with are fairly elementary, and simple tools such as the ratio test confirm that they converge.

Here is an example of this (the heat equation): $$\begin{align} \color{blue}{u_t}&\color{blue}{=ku_{xx}},\qquad0<x<L,\qquad t>0,\\ u(0,t)&=u(L,t)=0,\\ u(x,0)&=100. \end{align}$$ Let $u(x,t)=\phi(x)G(t)$. That is, we are assuming that it is separable. Then $$\begin{align} \phi G'&=k\phi''G,\\ \frac{\phi''}{\phi}&=\frac{G'}{kG}=-\lambda,\\ \phi''&=-\lambda\phi,\tag{1}\\ G'&=-\lambda kG.\tag{2} \end{align}$$ The ODE $(1)$ has three cases. Let us also apply the boundary conditions to it:

If $\lambda<0$, then $$\begin{align} \phi(x)&=c_1\cosh\sqrt{-\lambda}x+c_2\sinh\sqrt{-\lambda}x,\\ \phi(0)&=c_1=0,\\ \phi(L)&=c_2\sinh\sqrt{-\lambda}L=0\longrightarrow c_2=0. \end{align}$$ If $\lambda=0$, then $$\begin{align} \phi(x)&=c_1x+c_2,\\ \phi(0)&=c_2=0,\\ \phi(L)&=c_1L=0\longrightarrow c_1=0. \end{align}$$ If $\lambda>0$, then $$\begin{align} \phi(x)&=c_1\sin\sqrt\lambda x+c_2\cos\sqrt\lambda x,\\ \phi(0)&=c_2=0,\\ \phi(L)&=c_1\sin\sqrt\lambda L=0\longrightarrow\lambda=\left(\frac{n\pi}L\right)^2,\qquad n\in\mathbb{N}. \end{align}$$ Therefore, $$ \phi_n(x)=B_n\sin\frac{n\pi x}L. $$ The ODE $(2)$ is simply $$ G_n(t)=C_n\exp\left[-k\left(\frac{n\pi}L\right)^2t\right]. $$ As a result, we have that $$ u_n(x,t)=A_n\sin\frac{n\pi x}L\exp\left[-k\left(\frac{n\pi}L\right)^2t\right], $$ which, by the superposition principle, gives us the general solution to the PDE $$ \color{blue}{u(x,t)=\sum_{n=1}^\infty A_n\sin\frac{n\pi x}L\exp\left[-k\left(\frac{n\pi}L\right)^2t\right].}\tag{3} $$ Applying the initial condition to $(3)$ yields $$ u(x,0)=\sum_{n=1}^\infty A_n\sin\frac{n\pi x}L=100, $$ and using the orthogonality of sine to solve for $A_n$ gives us $$ A_n=\frac{200}L\int_0^L\sin\frac{n\pi x}L\,dx=\left\{\begin{array}{ccc}0&n\text{ even},\\\frac{400}{n\pi}&n\text{ odd}.\end{array}\right. $$ Here is the plot of the function while taking $k=1$ and $L=1$:


It is impressive the amount of work and theory required to solve a very simple-looking PDE!

Note: The waviness at the top of the gif in its first frame is due to a poor numerical approximation of the infinite Fourier series.
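For completeness, here is a NumPy sketch (my own addition) that reproduces the coefficients $A_n$ by quadrature, with $L=1$: they come out to $0$ for even $n$ and $400/(n\pi)$ for odd $n$, as claimed.

```python
import numpy as np

# Quadrature check of A_n = (200/L) * integral of sin(n*pi*x/L) over [0, L].
L = 1.0
x, dx = np.linspace(0.0, L, 200_001, retstep=True)

A = {n: (200/L)*(np.sin(n*np.pi*x/L)).sum()*dx for n in (1, 2, 3, 4)}
print(A)   # ~400/pi, ~0, ~400/(3*pi), ~0
```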

Introduction to Fourier Series

The Fourier series of a function $f=f(t)$ with period $2L$ is defined by $$Sf(t):=a_0+\sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi t}{L}+b_n\sin\frac{n\pi t}{L}\right],$$ where $$\begin{align} a_0&:=\frac{1}{2L}\int_{-L}^{L}f(t)\,dt,\\ a_n&:=\frac{1}{L}\int_{-L}^{L}f(t)\cos\frac{n\pi t}{L}\,dt,\\ b_n&:=\frac{1}{L}\int_{-L}^{L}f(t)\sin\frac{n\pi t}{L}\,dt, \end{align}$$ and $f$ and $f'$ are piecewise continuous on $[-L,L]$.

How this conclusion was achieved will be elaborated upon later. For now, let us apply it to a couple of functions.

Example 1: compute the Fourier series of $$ f(t):=\left\{\begin{matrix} 1,\qquad\text{for }0<t<\pi;\\ 0,\qquad\text{for }t=0\text{, }\pm\pi;\\ -1,\qquad\text{for }-\pi<t<0; \end{matrix}\right. $$ with $f(t)=f(t+2\pi)$ for all $t$.

Visually, this is what we have:

Note that both $f$ and $f'$ are piecewise continuous. Because of this, we can compute its Fourier series.

Also note that $L=\pi$.

To find $a_0$, observe that $f$ is an odd function. Hence, the integral $$ a_0=\frac1{2\pi}\int_{-\pi}^\pi f(t)\,dt=0, $$ because the areas "cancel out."

To find $a_n$, observe that the product of $f$ (odd) and $\cos$ (even) is an odd function. Therefore, as with $a_0$, $$ a_n=\frac1\pi\int_{-\pi}^\pi f(t)\cos nt\,dt=0. $$ Finally, to compute $b_n$, observe that the product of $f$ (odd) and $\sin$ (odd) is an even function. As a result, $$ b_n=\frac1\pi\int_{-\pi}^\pi f(t)\sin nt\,dt=\frac2\pi\int_0^\pi\sin nt\,dt=\frac2{n\pi}\left[-\cos nt\right]_0^\pi=-\frac2{n\pi}\left[\cos n\pi-1\right]=-\frac2{n\pi}\left[(-1)^n-1\right]. $$ This implies that $$ b_n=\left\{\begin{matrix} 0\qquad\text{if }n\text{ is even},\\ \frac4{n\pi}\qquad\text{if }n\text{ is odd}. \end{matrix}\right. $$ As a result, $Sf(t)$ boils down to $$ Sf(t)=\sum_{k=1}^\infty\frac4{(2k-1)\pi}\sin[(2k-1)t]. $$ To test this Fourier series' power, if we let the upper bound of summation be equal to just $10$, we will get the following approximation to our function $f$:
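Here is a small NumPy sketch (my own addition) of the partial sums of $Sf$; with enough terms they approach $\pm1$ away from the jumps, and the series vanishes exactly at the jump $t=0$:

```python
import numpy as np

# Partial sums of the square wave's Fourier series Sf.
def Sf(t, terms=10):
    k = np.arange(1, terms + 1)
    n = 2*k - 1                      # odd harmonics only
    return np.sum(4/(n*np.pi) * np.sin(np.outer(np.atleast_1d(t), n)), axis=1)

print(Sf(np.pi/2, terms=200))    # close to f(pi/2) = 1
print(Sf(-np.pi/2, terms=200))   # close to -1
print(Sf(0.0))                   # exactly 0 at the jump
```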

I am still trying to get the hang of Fourier series, especially when it comes to computing the coefficients' integrals by exploiting the symmetries of the functions' products. Functional parity is new to me as a named concept, although I have had the idea in mind for many years. When I get better at them, I will be able to approximate functions in terms of infinite sums! Hopefully this will simplify the study of nonhomogeneous PDEs.

Heat Equation with Zero-Temperature Boundaries (Continued)

A few entries ago, I concluded that the solution to the heat equation was$$u(x,t)=\sum_{n=1}^{\infty}B_n\sin\left(\frac{n\pi x}{L}\right)e^{-k(n\pi/L)^2t}.$$However, I didn't know how to determine what each $B_n$ is. Luckily, orthogonality tells us that$$\int_0^L\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{m\pi x}{L}\right)\,dx=\begin{cases}0&\text{if }n\neq m,\\\frac{L}{2}&\text{if }n=m.\end{cases}$$Then, applying the initial condition $u(x,0)=f(x)$, I observe that$$f(x)=\sum_{n=1}^{\infty}B_n\sin\left(\frac{n\pi x}{L}\right).$$Finally, multiplying both sides by $\sin(m\pi x/L)$ and integrating from $0$ to $L$ gives$$B_m=\frac{2}{L}\int_0^Lf(x)\sin\left(\frac{m\pi x}{L}\right)\,dx.$$The solution to the heat equation is now complete. Let's then solve a real-life application:
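The orthogonality relation can be confirmed numerically; here is a quick NumPy sketch (my own, with $L=30$ as an arbitrary choice):

```python
import numpy as np

# Numeric check of the sine orthogonality relation on [0, L].
L = 30.0
x, dx = np.linspace(0.0, L, 200_001, retstep=True)

def inner(n, m):
    return (np.sin(n*np.pi*x/L)*np.sin(m*np.pi*x/L)).sum()*dx

print(inner(2, 2), L/2)   # L/2 when n = m
print(inner(2, 5))        # ~0 when n != m
```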

Problem

Suppose that I have a metal rod of length $30$ centimeters with its lateral surface insulated. I heat it up to $100$ degrees Celsius uniformly and cool its ends down to $0$ degrees. What function models the rod's temperature distribution at $t=5$ seconds? Assume the rod to have thermal diffusivity $k=20\text{ cm}^2/s$ .

Solution

We have the following PDE:\begin{align}\frac{\partial u}{\partial t}&=20\frac{\partial^2u}{\partial x^2},\\u(0,t)&=0,\\u(30,t)&=0,\\u(x,0)&=100.\end{align}Then its solution has the following one-term approximation:$$\begin{align}B_1&=\frac{20}{3}\int_0^{30}\sin\left(\frac{\pi x}{30}\right)\,dx=\frac{400}{\pi},\\u(x,5)&\approx\frac{400}{\pi}\sin\left(\frac{\pi x}{30}\right)e^{-\pi^2/9}.\end{align}$$Letting $t$ be arbitrary, this is what we have visually:


3-D and Contour Views
The approximation to the solution is due to my current inability to compute Fourier series. Hopefully, in the next entry, I'll know how to evaluate those infinite sums. Also, I can't wait to study both Einstein's and Schrödinger's equations.
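For the record, here is the one-term approximation as a small Python function (my own transcription of the formula above, with $k=20$, $L=30$, and $B_1=400/\pi$):

```python
import numpy as np

# One-term approximation of the rod's temperature distribution.
k, L = 20.0, 30.0
B1 = 400/np.pi

def u(x, t):
    return B1*np.sin(np.pi*x/L)*np.exp(-k*(np.pi/L)**2*t)

# Temperature at the midpoint of the rod after 5 seconds.
print(u(15.0, 5.0))
```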

The Boy Who Could Count

Eines Tages gab es einen guten Mathematiker namens Carl Friedrich Gauß... I'm just kidding; this note won't be in German!


Legend says that there was a young German boy, perhaps ten to eleven years old (he was perhaps in the fourth or fifth grade), by the name of Carl Friedrich Gauss (1777 - 1855), who once added the numbers from $1$ to $100$ in no more than a few seconds. His teacher was a lazy man who enjoyed putting his students to work for long hours so that he could take a nap every once in a while. It is too bad for him that one of them so happened to be none other than the mathematical legend himself, the "prince of mathematicians"! The way that the boy solved this problem so quickly was by noticing a subtle pattern in the sum:

Adding all the numbers from $1$ to $100$ is a cumbersome task. I mean, really:$$\begin{align}1+2&=3,\\3+3&=6,\\6+4&=10,\\10+5&=15,\\\end{align}$$and so on. However, what young Friedrich did was add the numbers backward! Like this:$$\begin{align}1+100&=101,\\2+99&=101,\\3+98&=101,\\4+97&=101,\,\dots\end{align}$$He observed that all of them add up to $101$! As a result, all that he had to do was add this number $50$ times because:$$\dots\,50+51=101.$$This is the same thing as multiplying $50\times101$ to obtain $5,050$, which is the correct result.

That's it. That's all there is to it. Nevertheless, as mathematicians, we tend to be very fancy or "elegant". So, let's generalize this finding:

We want to add all the integers from $1$ to $n$. In other words, we want to know what$$\sum_{i=1}^{n}i=1+2+3+\cdots+n$$adds up to.

Straight from Gauss' problem, it's clear that he summed the numbers up to $100$ by multiplying $101$ by $50$. In other words, if we let $n=100$, then he just multiplied $n+1$ by $n/2$. That's it! We just generalized the problem and can now claim that:$$\sum_{i=1}^{n}i=\frac{n(n+1)}{2}.$$From this expression, it follows that the sum of all the numbers from $1$ to $1,234,567$ is$$\frac{1,234,567(1,234,568)}{2}=762,078,456,028.$$I bet noticing this pattern saved our little fella some playground time.
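And here is a tiny Python check of the closed form against the brute-force sum:

```python
# Gauss' closed form for 1 + 2 + ... + n.
def gauss_sum(n):
    return n*(n + 1)//2

print(gauss_sum(100))                            # 5050
print(gauss_sum(100) == sum(range(1, 101)))      # True
print(gauss_sum(1_234_567))                      # 762078456028
```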

Discovering Epic $\LaTeX$ IDEs

So, today my adviser and I went over a few really good $\LaTeX$ IDEs and ended up choosing Texmaker as our default one for writing publishable, mathematical articles and journals.

I felt like typing up a homework assignment from my differential geometry class, and this is what I ended up with (it's incomplete):

Differential Geometry Homework 2

I'm still pretty new to this, but hopefully I can end up putting really good-looking documents together (hopefully no one cheats off of this, lol).

Heat Equation with Zero-Temperature Boundaries

Homogeneous Linear ODEs with Constant Coefficients


As I was finalizing my study of the method of separation of variables, I realized that I had forgotten how to solve homogeneous linear ODEs with constant coefficients, that is, equations of the form$$a\frac{d^2y}{dx^2}+b\frac{dy}{dx}+cy=0.$$Basically, we substitute $y=e^{\lambda x}$, which yields the characteristic equation$$a\lambda^2+b\lambda+c=0,$$and solve the resulting quadratic equation for $\lambda$:$$\lambda=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$There are three cases:

  • If $b^2-4ac>0$, then we have a solution of the form$$y=c_1e^{\lambda_1x}+c_2e^{\lambda_2x}$$
  • If  $b^2-4ac=0$, then we have a solution of the form$$y=c_1e^{\lambda x}+c_2xe^{\lambda x}$$
  • If  $b^2-4ac<0$, then we have a solution of the form$$y=e^{\alpha x}(c_1\cos\beta x+c_2\sin\beta x)$$

The last one is derived from Euler's formula $e^{i\theta}=\cos\theta+i\sin\theta$, where the roots take the form $\lambda=\alpha\pm i\beta$.

Example$$y''+3y'+2y=6$$To find the complementary function $y_c$, we perform a lambda substitution and solve:$$\lambda^2+3\lambda+2=0\\\lambda=-2\text{ or }-1\\y_c=c_1e^{-x}+c_2e^{-2x}$$The particular solution has the form $y_p=A$. Differentiating and substituting accordingly gives$$2A=6\Longrightarrow A=3$$Therefore, by the superposition principle, the general solution to the ODE is$$y=y_c+y_p=c_1e^{-x}+c_2e^{-2x}+3$$
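A quick sympy check (my own addition) that the general solution really satisfies the ODE for any constants $c_1$, $c_2$:

```python
import sympy as sp

x = sp.symbols('x')
c1, c2 = sp.symbols('c1 c2')

# General solution found above for y'' + 3y' + 2y = 6.
y = c1*sp.exp(-x) + c2*sp.exp(-2*x) + 3
residual = y.diff(x, 2) + 3*y.diff(x) + 2*y - 6
print(sp.simplify(residual))   # 0
```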

The Method of Separation of Variables


We want to solve the PDE$$\frac{\partial u}{\partial t}=k\frac{\partial^2u}{\partial x^2}$$with initial and boundary conditions$$u(x,0)=f(x)\\u(0,t)=0\\u(L,t)=0$$To do that, let's assume the solution has the form$$u(x,t)=\varphi(x)G(t)$$Don't ask, just accept it. There's a reason behind picking such form, but it wouldn't help to explain it now. With that said, let's plug this solution into our PDE to obtain$$\varphi(x)\frac{dG}{dt}=kG(t)\frac{d^2\varphi}{dx^2}$$Now, let's separate variables!$$\frac{1}{kG(t)}\frac{dG}{dt}=\frac{1}{\varphi(x)}\frac{d^2\varphi}{dx^2}$$If you look at what we just obtained, you're eventually going to ask yourself: "how's it possible for a function in terms of $t$ to be equal to a function in terms of $x$? This makes no sense at all!" Well, of course it doesn't, unless they're both equal to the same constant! Let's call this constant $\lambda$:$$\frac{1}{kG(t)}\frac{dG}{dt}=\frac{1}{\varphi(x)}\frac{d^2\varphi}{dx^2}=-\lambda$$Again, the reason we separated our equation this way and the reason we put a negative sign before $\lambda$ are for the purpose of convenience only. It'll all make sense down the road, so stop scratching your head and keep reading! By the way, notice that our partial derivatives turned into normal ones because we're dealing with functions with one variable.

Now, let's solve the first ODE$$\frac{dG}{dt}=-\lambda kG(t)$$Our famous toolkit methods lead to the solution$$G(t)=ce^{-\lambda kt}$$With that out of the way, we can now solve the second ODE (it's a BVP)$$\frac{d^2\varphi}{dx^2}=-\lambda\varphi\\\varphi(0)=0\\\varphi(L)=0$$Performing a lambda substitution would be confusing, so let's call it an $m$ substitution. It'll yield$$m=\pm\sqrt{-\lambda}$$Again we have three cases:

  • If $\lambda>0$, then$$\varphi=c_1\cos\sqrt\lambda x+c_2\sin\sqrt\lambda x$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_1=0\\\varphi(L)=0\Longrightarrow c_2\sin\sqrt{\lambda}L=0$$To avoid the trivial solution, we must have that$$\sin\sqrt\lambda L=0\Longrightarrow\lambda=\left(\frac{n\pi}{L}\right)^2$$where $n\in\mathbb{N}$. As a result, we're left with$$\varphi(x)=c_2\sin\frac{n\pi x}{L}$$
  • If $\lambda=0$, then$$\varphi=c_1+c_2x$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_1=0\\\varphi(L)=0\Longrightarrow c_2=0$$Therefore, $\lambda=0$ is not an eigenvalue for this problem.

  • If $\lambda<0$, then$$\varphi=c_1e^{\sqrt{s}\,x}+c_2e^{-\sqrt{s}\,x}$$where $\lambda=-s$ with $s>0$. Let's use the hyperbolic functions to represent this$$\varphi=c_3\cosh\sqrt{s}\,x+c_4\sinh\sqrt{s}\,x$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_3=0\\\varphi(L)=0\Longrightarrow c_4\sinh\sqrt{s}\,L=0\Longrightarrow c_4=0$$because $\sqrt{s}\,L>0$ and $\sinh$ is never zero for a positive argument. Therefore, $\lambda<0$ is not an eigenvalue for this problem, either.

In summary, we obtained product solutions of the heat equation$$u(x,t)=B\sin\frac{n\pi x}{L}e^{-k(n\pi/L)^2t}\qquad n=1,2,3,\dots$$Finally, by the principle of superposition, we claim that the following infinite series is the solution of our heat conduction problem:$$u(x,t)=\sum_{n=1}^{\infty}B_n\sin\frac{n\pi x}{L}e^{-k(n\pi/L)^2t}$$Phew! I guess I could now do the same thing for the multidimensional case, lol.
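To see this series in action, here is a short Python sketch of my own (not part of the derivation). It assumes the coefficients $B_n$ come from the Fourier sine series of $f$ — a formula we have not derived yet — and uses the made-up initial profile $f(x)=x(L-x)$, which vanishes at both endpoints as the boundary conditions require:

```python
import numpy as np

def heat_series(x, t, f, L=1.0, k=1.0, n_terms=50):
    """Evaluate u(x,t) = sum_n B_n sin(n*pi*x/L) * exp(-k*(n*pi/L)**2 * t).

    The coefficients B_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx are
    approximated with a midpoint Riemann sum.
    """
    N = 2000
    dx = L / N
    xs = (np.arange(N) + 0.5) * dx          # midpoints of the subintervals
    x = np.asarray(x, dtype=float)
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        B_n = (2.0 / L) * np.sum(f(xs) * np.sin(n * np.pi * xs / L)) * dx
        u = u + B_n * np.sin(n * np.pi * x / L) * np.exp(-k * (n * np.pi / L) ** 2 * t)
    return u

# Hypothetical initial temperature profile, compatible with u(0,t) = u(L,t) = 0.
f = lambda x: x * (1.0 - x)
x = np.linspace(0.0, 1.0, 11)
print(heat_series(x, 0.0, f))   # closely reproduces f(x) at t = 0
print(heat_series(x, 0.1, f))   # the profile decays toward zero as t grows
```

At $t=0$ the truncated series reproduces $f$, and as $t$ grows each mode decays at the rate $e^{-k(n\pi/L)^2t}$, the higher modes fastest.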

Multidimensional Heat Equation

Before moving on to the method of separation of variables, let us enhance our one-dimensional heat equation to three dimensions by first introducing the gradient operator:$$\nabla=\frac{\partial}{\partial x}\hat i+\frac{\partial}{\partial y}\hat j+\frac{\partial}{\partial z}\hat k$$With that in mind, let us enhance the heat energy to encompass a three-dimensional, arbitrary subregion $R$:$$\iiint_Rc\rho u\,dV$$In three dimensions, the heat flux becomes a vector instead of a scalar whose sign indicated the direction of flow ($\varphi>0$ meant flow to the right). Because of this, the heat flowing out across the boundary per unit time becomes the closed surface integral$$\oiint_{\partial R}\varphi\cdot\hat n\,dS$$where $\hat n$ is the outward unit normal vector to the region's surface $\partial R$.
It clearly follows that our multidimensional heat equation then turns into$$\frac{d}{dt}\iiint_Rc\rho u\,dV=-\oiint_{\partial R}\varphi\cdot\hat n\,dS+\iiint_RQ\,dV$$But this is pretty hard to work with since the closed surface integral is expressed in terms of the scalar product of two vectors. However, there is this really cool theorem called the divergence theorem which, simply put, states that$$\iiint_R\nabla\cdot\vec A\,dV=\oiint_{\partial R}\vec A\cdot\hat n\,dS$$We can use it to rewrite our heat equation above like this:$$\frac{d}{dt}\iiint_Rc\rho u\,dV=-\iiint_R\nabla\cdot\varphi\,dV+\iiint_RQ\,dV$$Since $R$ is arbitrary and does not move in time, we can bring the time derivative inside the first integral and then drop the integrals altogether to end up with$$c\rho\frac{\partial u}{\partial t}=-\nabla\cdot\varphi+Q$$This is analogous to the one-dimensional heat equation we derived in the last entry! All we need now is the three-dimensional equivalent of Fourier's law of heat conduction.$$\varphi=-K_0\nabla u$$That was pretty straightforward. Now, substituting, treating $c$, $\rho$ and $K_0$ as constants and letting $Q=0$ yields our desired, three-dimensional heat equation$$\frac{\partial u}{\partial t}=k\nabla^2u$$where $k=K_0/(c\rho)$ and $\nabla^2=\nabla\cdot\nabla$ is often called the Laplacian.
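As a quick numerical illustration (my own sketch, not part of the derivation), $\partial u/\partial t=k\nabla^2u$ can be stepped forward in time with an explicit finite-difference scheme — here in two dimensions, with a five-point stencil for the Laplacian and a made-up hot spot as the initial condition:

```python
import numpy as np

def step_heat_2d(u, k, dx, dt):
    """One explicit finite-difference step of u_t = k * (u_xx + u_yy).

    The five-point stencil approximates the Laplacian on the interior;
    the boundary rows/columns stay at zero (Dirichlet conditions).
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] -
                       4.0 * u[1:-1, 1:-1]) / dx**2
    return u + k * dt * lap

n, k = 51, 1.0
dx = 1.0 / (n - 1)
dt = 0.2 * dx**2 / k            # below the stability limit dx**2 / (4k)
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0         # an initial spike of heat in the middle
for _ in range(200):
    u = step_heat_2d(u, k, dx, dt)
print(u.max())                  # the spike spreads out and flattens
```

The time step respects the standard stability restriction of the explicit scheme; exceed it and the iteration blows up instead of diffusing.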

I had to get sidetracked there a little bit, but stay tuned for the next entry, where we will be discussing the method of separation of variables!

Some Insight on Differential Geometry

Suppose that you have a circle rolling on a flat surface without slipping and that it also has a dot on its edge. What path does the dot trace?

Tackling this question by means of the conventional functional approach $y=f(x)$ is a tedious task. For that reason, we parameterize the curve in terms of a convenient variable $t$ like so: $r(t)=g_1(t)\partial_x+g_2(t)\partial_y$. This is just a vector-valued function of $t$.

Visually, this is the problem:


Measuring $t$ as the angle between the radius drawn to the dot and the circle's downward vertical yields the following parameterization:$$r(t)=R\left[t+\cos\left(t-\frac{3\pi}{2}\right)\right]\partial_x-R\sin\left(t-\frac{3\pi}{2}\right)\partial_y.$$Plotting this parametric equation yields the following graph:

This is the path that the dot traces. Now, a good question could be, what exactly happens at the cusp? Also, what would happen if we slide the dot upward or downward from the circumference?
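One way to probe the cusp question numerically (a little sketch of my own, with $R=1$ picked for concreteness): the speed of the parameterization above works out to $2R\left|\sin(t/2)\right|$, so it vanishes exactly at $t=2\pi n$ — the dot momentarily stops at each cusp.

```python
import numpy as np

R = 1.0  # arbitrary radius, just for the demonstration

def r(t):
    """The rolling-circle parameterization from above."""
    x = R * (t + np.cos(t - 3 * np.pi / 2))
    y = -R * np.sin(t - 3 * np.pi / 2)
    return np.array([x, y])

def speed(t, h=1e-6):
    """Central-difference approximation of |r'(t)|."""
    return np.linalg.norm((r(t + h) - r(t - h)) / (2 * h))

print(speed(np.pi))      # top of the arch: the dot moves fastest, |r'| = 2R
print(speed(2 * np.pi))  # at the cusp the speed drops to (numerically) zero
```

Sliding the dot off the circumference replaces the cusps with loops (a prolate trochoid) or smooth bumps (a curtate one) — something the same numerical setup can explore.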

Derivation of the Heat Equation

ODEs vs PDEs


I began studying ODEs by solving equations straight off the bat. I started with the simplest one I could think of,$$\frac{dy}{dx}=0,$$and moved on to progressively more difficult ones, like$$\frac{dy}{dx}=x^2.$$While learning, I developed a 'toolkit' of techniques, each of which could solve equations meeting very specific criteria. For example, one 'tool' claimed that an equation of the form$$\frac{dy}{dx}+P(x)y=f(x)$$had a solution$$y=\frac{\int e^{\int P(x)\,dx}f(x)\,dx}{e^{\int P(x)\,dx}}.$$I picked up several other tools along the way. However, there were never enough; the set of linear ODEs alone,$$\sum_{k=0}^{n}a_k(x)\frac{d^ky}{dx^k}=f(x),$$contains only a vanishingly small subset of equations that can be solved by analytic methods, while the rest require a numerical approach instead. Moreover, when it comes to nonlinear ODEs, what lies before our eyes, more often than not, is a vast ocean full of esoteric sea creatures totally disjoint from all of our equation-solving findings, so to speak.
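That particular 'tool' — the integrating-factor formula — is at least easy to sanity-check symbolically. Here is a small SymPy sketch of my own (with the made-up choices $P(x)=2$ and $f(x)=x$) verifying that the formula really produces a solution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Illustrative linear first-order ODE: y' + 2y = x, i.e. P(x) = 2, f(x) = x.
P, f = sp.Integer(2), x

mu = sp.exp(sp.integrate(P, x))              # integrating factor e^{int P dx}
y_formula = sp.integrate(mu * f, x) / mu     # the 'toolkit' formula (one antiderivative)

ode = sp.Eq(y(x).diff(x) + P * y(x), f)
print(y_formula)                             # a particular solution of the ODE
print(sp.checkodesol(ode, sp.Eq(y(x), y_formula)))  # True in the first slot: it checks out
```

Of course, a check like this only confirms that one answer works; it says nothing about why the method works or whether the solution is unique.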

I thought that I could try an a posteriori approach toward PDEs, and although it could work, it would never provide the theoretical groundwork required to reach higher levels of understanding. To circumvent this issue, I decided to opt for a physics-based introduction to them, since mathematical legends such as Laplace found plenty of inspiration in the observational works of Newton and similar applied scientists.

Now, without further ado, let me present the heat equation to you!

 

The Heat Equation


The first PDE that we are going to study is called the heat equation. Let us derive it by considering a one-dimensional model rod of length $L$ with a perfectly insulated lateral surface. Visually:

  

The heat equation is governed by the word equation:$$\begin{matrix}\text{rate of change}\\\text{of heat energy}\\\text{in time}\end{matrix}=\begin{matrix}\text{heat energy flowing}\\\text{across boundaries}\\\text{per unit time}\end{matrix}+\begin{matrix}\text{heat energy generated}\\\text{inside per unit time.}\end{matrix}$$Therefore, let us define the following functions:$$\begin{align}e(x,t)&:=\text{thermal energy density}\\\varphi(x,t)&:=\begin{matrix}\text{heat flux (the amount of thermal energy per unit}\\\text{time flowing to the right per unit surface area)}\end{matrix}\\Q(x,t)&:=\text{heat energy per unit volume generated per unit time}\\\end{align}$$Our word equation then becomes$$\frac{\partial}{\partial t}\left[e(x,t)A\Delta x\right]\approx\varphi(x,t)A-\varphi(x+\Delta x,t)A+Q(x,t)A\Delta x$$Notice that this is only an approximation because we are taking a very small slice of the rod of size $\Delta x$ (the functions are deemed almost constant at this size). We claim, however, that it becomes exact as we let $\Delta x\to0$:$$\frac{\partial e}{\partial t}=\lim_{\Delta x\to0}\frac{\varphi(x,t)-\varphi(x+\Delta x,t)}{\Delta x}+Q(x,t)=-\frac{\partial\varphi}{\partial x}+Q(x,t)$$We usually talk about the temperature of an object rather than its thermal energy density. Therefore, let us introduce the functions$$\begin{align}c(x)&:=\text{specific heat}\\\rho(x)&:=\text{mass density}\\u(x,t)&:=\text{temperature}\end{align}$$and the equality$$e(x,t)=c(x)\rho(x)u(x,t)$$Substituting yields a new equation$$c(x)\rho(x)\frac{\partial u}{\partial t}=-\frac{\partial\varphi}{\partial x}+Q(x,t)$$This is the heat equation. However, we cannot solve it since it involves two unknown functions $u$ and $\varphi$. 
For this reason, we introduce Fourier's law of heat conduction:$$\varphi(x,t)=-K_0\frac{\partial u}{\partial x}$$where $K_0$ is the thermal conductivity. Substituting it into the heat equation yields$$c(x)\rho(x)\frac{\partial u}{\partial t}=K_0\frac{\partial^2u}{\partial x^2}+Q(x,t)$$For the sake of simplicity, let us treat $c$, $\rho$, and $K_0$ as constants. Hence,$$\frac{\partial u}{\partial t}=k\frac{\partial^2u}{\partial x^2}+\frac{Q(x,t)}{c\rho}$$where $k=K_0/(c\rho)$ is called the thermal diffusivity. Letting $Q=0$ yields the canonical heat equation; this is what we are mostly going to work with from here on.
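As a sanity check of the equation itself (not something derived in this entry), one can verify symbolically that the classical heat kernel $u(x,t)=e^{-x^2/(4kt)}/\sqrt{4\pi kt}$ satisfies $u_t=ku_{xx}$:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', positive=True)

# The heat kernel: a classical solution of the 1D heat equation.
u = sp.exp(-x**2 / (4 * k * t)) / sp.sqrt(4 * sp.pi * k * t)

residual = sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2))
print(residual)  # 0: the kernel satisfies u_t = k u_xx exactly
```

This check does not replace the derivation above; it only confirms that the PDE we arrived at admits the solution it is famous for.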

This is how the heat equation is derived. The equalities we presented are physical in nature, but since this is a mathematical blog, we are just going to accept them as true. In the next entry, we are going to go over the method of separation of variables to solve the heat equation with initial and boundary conditions$$\begin{align}\text{IC}&:u(x,0)=f(x)\\\text{BCs}&:\left\{\begin{matrix}u(0,t)&=0\\u(L,t)&=0\end{matrix}\right.\end{align}$$I will talk more about initial and boundary conditions in the next entry as well. Until then, have fun!

In the Beginning...

Hello!

When I first began studying ordinary differential equations (ODEs), I immediately fell in love with their partial counterparts. To me, it felt as though the study of ODEs was a simple setup for understanding partial differential equations (PDEs) sometime in the near future. As I got further into the subject of ODEs, however, I began to experience how difficult all the theoretical groundwork for solving them really was. It got to the point where we were studying only certain ideal cases of these equations, because the vast majority of them either had no analytical solutions or required some arcane numerical methods to solve. All throughout this journey, every once in a while, I would remember PDEs and cringe at the thought of having to solve one of those. Nevertheless, I really wanted to, especially after learning that they are a cutting-edge subject still in its theoretical infancy.

In my study of ODEs, I learned how to solve equations of the form$$\sum_{k=0}^{n}a_k(x)\frac{d^ky}{dx^k}=b(x)$$by means of countless methods akin to recipes in a cookbook. Perhaps this was because the course sought to instill the ideas as recipes rather than rigorous results, to put it that way. For example, to solve the ODE$$\frac{dy}{dx}=e^{3x+2y}$$we must:

(1) recognize that it is separable and separate$$\frac{dy}{dx}=(e^{3x})(e^{2y})\Longrightarrow e^{-2y}dy=e^{3x}dx$$(2) integrate and 'absorb' integrating constants$$\int e^{-2y}dy=\int e^{3x}dx\Longrightarrow-\frac{1}{2}e^{-2y}=\frac{1}{3}e^{3x}+C$$(3) solve for $y$$$y=-\frac{1}{2}\ln\left(-\frac{2}{3}e^{3x}+C\right)$$But how can we be sure that this method works? That is, how do we know that this is the solution? Is it unique? Or, in other words, could there be others?
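For what it's worth, the answer the recipe produces can at least be verified after the fact. A small SymPy sketch of my own confirms that the expression above satisfies the ODE — though it says nothing about uniqueness:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), sp.exp(3 * x + 2 * y(x)))
# The solution obtained by separating variables (valid wherever the
# logarithm's argument is positive):
sol = sp.Eq(y(x), -sp.Rational(1, 2) * sp.log(C - sp.Rational(2, 3) * sp.exp(3 * x)))
print(sp.checkodesol(ode, sol))  # True in the first slot means the ODE is satisfied
```

So the separation recipe does hand us *a* solution; whether it is the only one is exactly the kind of question existence-and-uniqueness theory answers.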

An even more outrageous 'recipe' for solving an ODE is the idea of an integrating factor, which claims that an ODE of the form$$\frac{dy}{dx}+P(x)y=f(x)$$can be solved by multiplying everything by the integrating factor $e^{\int P(x)dx}$. For example, solve$$3\frac{dy}{dx}+12y=4.$$(1) get into standard form$$\frac{dy}{dx}+4y=\frac{4}{3}$$(2) find the integrating factor$$e^{\int4dx}=e^{4x}$$(3) multiply everything by it$$e^{4x}\frac{dy}{dx}+4e^{4x}y=\frac{4}{3}e^{4x}$$(4) wrap it all up by factoring, integrating and solving for $y$$$\begin{align}\frac{d}{dx}\left[e^{4x}y\right]&=\frac{4}{3}e^{4x}\\e^{4x}y&=\frac{1}{3}e^{4x}+C\\y&=Ce^{-4x}+\frac{1}{3}\end{align}$$What in the world? Who came up with that method, where is the theorem that supports it, and can it be applied to other differential equations?
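Outrageous or not, the recipe's answer is easy to spot-check numerically. Here is a tiny sketch of mine plugging $y=Ce^{-4x}+\frac13$ (with the arbitrary choice $C=2$) back into $3y'+12y=4$:

```python
import numpy as np

C = 2.0  # an arbitrary constant of integration
y = lambda x: C * np.exp(-4 * x) + 1.0 / 3.0
dy = lambda x: -4.0 * C * np.exp(-4 * x)   # exact derivative of y

xs = np.linspace(-1.0, 1.0, 9)
residual = 3 * dy(xs) + 12 * y(xs) - 4     # should vanish identically
print(np.max(np.abs(residual)))            # ~0, up to floating-point error
```

The residual vanishes for any $C$, since the $Ce^{-4x}$ terms cancel exactly — which is precisely what makes $C$ a free constant.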

Another thing that blew my mind was the Principle of Superposition, which claims that if a linear homogeneous ODE has two solutions $y_1$ and $y_2$, then their sum $y_1+y_2$ (indeed, any linear combination of them) is another solution. Why is that true!? It makes total sense when you look at it from a physical point of view (think of overlapping waves), but where is the mathematical groundwork?
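The mathematical groundwork is really just linearity itself, and it is easy to witness numerically. A little sketch of my own, using the textbook example $y''+y=0$ with the known solutions $y_1=\sin x$ and $y_2=\cos x$:

```python
import numpy as np

xs = np.linspace(0.0, 2 * np.pi, 101)

def residual(y, d2y):
    """Plug a candidate solution into y'' + y and return the residual."""
    return d2y(xs) + y(xs)

# y1 = sin and y2 = cos each solve y'' + y = 0; superposition says their
# sum does too.
y_sum = lambda x: np.sin(x) + np.cos(x)
d2y_sum = lambda x: -np.sin(x) - np.cos(x)   # exact second derivative of the sum

print(np.max(np.abs(residual(y_sum, d2y_sum))))  # ~0: the sum is also a solution
```

The general reason is that differentiation is linear, so the left-hand side of a linear homogeneous equation maps $y_1+y_2$ to the sum of the two (zero) residuals.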

There are many other methods or 'recipes' to solve ODEs, like substitutions, exact and Bernoulli forms, numerical methods, Laplace transforms and Fourier series, to name a few, but most of them were presented to us in the form of a toolkit obtained by rote memorization.

Now that I am studying PDEs, we began our journey with the convoluted Heat Equation. There are many observations I have accumulated about it thus far, but those will be the subject of another entry. All I will say right now is that my most dreaded nightmare has come true; PDEs are orders of magnitude harder!

In a good way, however: profound enough that we just 'get' the main idea that inspired all the mathematical behemoths who pioneered their theoretical development. It is no longer a toolkit approach, but one based on maieutics channeled toward our deepest analytic conscience. Perhaps back then I was not ready to undergo such a rigorous treatment of these kinds of equations, but now I feel just a tad more ready to give it one heck of a good try.

Stay tuned!