Introduction to Fourier Series

The Fourier series of a function $f=f(t)$ with period $2L$ is defined by $$Sf(t):=a_0+\sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi t}{L}+b_n\sin\frac{n\pi t}{L}\right],$$ where $$\begin{align} a_0&:=\frac{1}{2L}\int_{-L}^{L}f(t)\,dt,\\ a_n&:=\frac{1}{L}\int_{-L}^{L}f(t)\cos\frac{n\pi t}{L}\,dt,\\ b_n&:=\frac{1}{L}\int_{-L}^{L}f(t)\sin\frac{n\pi t}{L}\,dt, \end{align}$$ and $f$ and $f'$ are piecewise continuous on $[-L,L]$.

How this result is derived will be elaborated upon later. For now, let us apply it to a couple of functions.

Example 1: compute the Fourier series of $$ f(t):=\left\{\begin{matrix} 1,\qquad\text{for }0<t<\pi;\\ 0,\qquad\text{for }t=0\text{, }\pm\pi;\\ -1,\qquad\text{for }-\pi<t<0; \end{matrix}\right. $$ with $f(t)=f(t+2\pi)$ for all $t$.

Visually, this is what we have:

Note that both $f$ and $f'$ are piecewise continuous. Because of this, we can compute its Fourier series.

Also note that $L=\pi$.

To find $a_0$, observe that $f$ is an odd function. Hence, the integral $$ a_0=\frac1{2\pi}\int_{-\pi}^\pi f(t)\,dt=0, $$ because the areas "cancel out."

To find $a_n$, observe that the product of $f$ (odd) and $\cos$ (even) is an odd function. Therefore, as with $a_0$, $$ a_n=\frac1\pi\int_{-\pi}^\pi f(t)\cos nt\,dt=0. $$ Finally, to compute $b_n$, observe that the product of $f$ (odd) and $\sin$ (odd) is an even function. As a result, $$ b_n=\frac1\pi\int_{-\pi}^\pi f(t)\sin nt\,dt=\frac2\pi\int_0^\pi\sin nt\,dt=\frac2{n\pi}\left[-\cos nt\right]_0^\pi=-\frac2{n\pi}\left[\cos n\pi-1\right]=-\frac2{n\pi}\left[(-1)^n-1\right]. $$ This implies that $$ b_n=\left\{\begin{matrix} 0\qquad\text{if }n\text{ is even},\\ \frac4{n\pi}\qquad\text{if }n\text{ is odd}. \end{matrix}\right. $$ As a result, $Sf(t)$ boils down to $$ Sf(t)=\sum_{k=1}^\infty\frac4{(2k-1)\pi}\sin[(2k-1)t]. $$ To test this Fourier series' power, if we let the upper bound of summation be equal to just $10$, we will get the following approximation to our function $f$:
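Since I'm still getting the hang of these, here's a quick numerical sanity check of the series we just derived, sketched in Python (the function name is my own):

```python
import math

def square_wave_partial_sum(t, terms=10):
    """Partial sum of Sf(t) = sum_k 4/((2k-1) pi) * sin((2k-1) t)."""
    return sum(4 / ((2 * k - 1) * math.pi) * math.sin((2 * k - 1) * t)
               for k in range(1, terms + 1))

# With enough terms, the sum at t = pi/2 should approach f(pi/2) = 1,
# and at t = 0 it is exactly 0, matching how we defined f at the jump.
```

With `terms=10` this reproduces the approximation described above; cranking `terms` up tightens the fit everywhere except right at the jumps, where the overshoot (the Gibbs phenomenon) persists.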

I am still trying to get the hang of Fourier series, especially when it comes down to computing the coefficients' integrals by exploiting the symmetries of the products involved. Function parity is new to me as a formal tool, although I have had the concept in mind for many years. Once I get better at this, I will be able to approximate functions with infinite sums! Hopefully that will simplify the study of nonhomogeneous PDEs.

Heat Equation with Zero-Temperature Boundaries (Continued)

A few entries ago, I concluded that the solution to the heat equation was$$u(x,t)=\sum_{n=1}^{\infty}B_n\sin\left(\frac{n\pi x}{L}\right)e^{-k(n\pi/L)^2t}.$$However, I didn't know how to determine what each $B_n$ is. Luckily, orthogonality comes to the rescue:$$\int_0^L\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{m\pi x}{L}\right)\,dx=\left\{\begin{matrix}L/2,\qquad\text{if }n=m;\\0,\qquad\text{if }n\neq m.\end{matrix}\right.$$Then, applying the initial condition $u(x,0)=f(x)$, I observe that$$f(x)=\sum_{n=1}^{\infty}B_n\sin\left(\frac{n\pi x}{L}\right).$$Finally, multiplying both sides by $\sin(m\pi x/L)$ and integrating from $0$ to $L$ kills every term except $n=m$, giving$$B_m=\frac{2}{L}\int_0^Lf(x)\sin\left(\frac{m\pi x}{L}\right)\,dx.$$The solution to the heat equation is now complete. Let's then solve a real-life application:
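A quick way to convince yourself of the orthogonality of the sines is to integrate numerically; here is a rough midpoint-rule sketch in Python (`overlap` is just my name for the integral):

```python
import math

def overlap(n, m, L=1.0, steps=200_000):
    """Midpoint-rule approximation of the integral of
    sin(n pi x / L) * sin(m pi x / L) over [0, L]."""
    dx = L / steps
    return dx * sum(
        math.sin(n * math.pi * (i + 0.5) * dx / L)
        * math.sin(m * math.pi * (i + 0.5) * dx / L)
        for i in range(steps)
    )

# overlap(n, n) should come out to L/2, and overlap(n, m) to 0 for n != m.
```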

Problem

Suppose that I have a metal rod of length $30$ centimeters with its lateral surface insulated. I heat it up to $100$ degrees Celsius uniformly and cool its ends down to $0$ degrees. What function models the rod's temperature distribution at $t=5$ seconds? Assume the rod to have thermal diffusivity $k=20\text{ cm}^2/s$.

Solution

We have the following PDE:\begin{align}\frac{\partial u}{\partial t}&=20\frac{\partial^2u}{\partial x^2},\\u(0,t)&=0,\\u(30,t)&=0,\\u(x,0)&=100.\end{align}Keeping only the first term of the series gives the approximation$$\begin{align}B_1&=\frac{20}{3}\int_0^{30}\sin\left(\frac{\pi x}{30}\right)\,dx=\frac{400}{\pi},\\u(x,5)&\approx\frac{400}{\pi}\sin\left(\frac{\pi x}{30}\right)e^{-\pi^2/9}.\end{align}$$Letting $t$ be arbitrary, this is what we have visually:
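To double-check the numbers, here is a small Python sketch of the full series for this rod (the names are mine). Keeping $200$ terms confirms that the single-term approximation at $t=5$ is already accurate to a few thousandths of a degree:

```python
import math

K, L, T0 = 20.0, 30.0, 100.0   # diffusivity (cm^2/s), length (cm), initial temp (C)

def B(n):
    """Sine coefficients of f(x) = 100: (2/L) * integral = 400/(n pi) for odd n, 0 for even n."""
    return 0.0 if n % 2 == 0 else 4 * T0 / (n * math.pi)

def u(x, t, terms=200):
    """Truncated series solution of the rod problem."""
    return sum(B(n) * math.sin(n * math.pi * x / L)
               * math.exp(-K * (n * math.pi / L) ** 2 * t)
               for n in range(1, terms + 1))
```

At $t=5$ the $n=3$ term is already suppressed by a factor of $e^{-\pi^2}\approx5\times10^{-5}$, which is why the one-term approximation is so good here.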


3-D and Contour Views
The approximation to the solution is due to my current inability to compute Fourier series. Hopefully, by the next entry, I'll know how to evaluate those infinite sums. Also, I can't wait to study both Einstein's and Schrödinger's equations.

The Boy Who Could Count

Eines Tages gab es einen guten Mathematiker namens Carl Friedrich Gauß... ("Once upon a time, there was a good mathematician named Carl Friedrich Gauß...") I'm just kidding; this note won't be in German!


Legend says that there was a young German boy, perhaps ten or eleven years old (in the fourth or fifth grade), by the name of Carl Friedrich Gauss (1777 - 1855), who once added the numbers from $1$ to $100$ in no more than a few seconds. His teacher was a lazy man who enjoyed putting his students to work for long hours so that he could take a nap every once in a while. Too bad for him that one of those students happened to be none other than the mathematical legend himself, the "prince of mathematicians"! The boy solved the problem so quickly by noticing a subtle pattern in the sum:

Adding all the numbers from $1$ to $100$ one by one is a cumbersome task. I mean, really:$$\begin{align}1+2&=3,\\3+3&=6,\\6+4&=10,\\10+5&=15,\end{align}$$and so on. What young Carl did instead was pair the numbers from opposite ends of the sum, like this:$$\begin{align}1+100&=101,\\2+99&=101,\\3+98&=101,\\4+97&=101,\,\dots\end{align}$$He observed that every pair adds up to $101$! As a result, all he had to do was add this number $50$ times, because the pairs run out at$$\dots\,50+51=101.$$This is the same thing as multiplying $50\times101$ to obtain $5,050$, which is the correct result.

That's it. That's all there is to it. Nevertheless, as mathematicians, we tend to be very fancy or "elegant". So, let's generalize this finding:

We want to add all the integers from $1$ to $n$. In other words, we want to know what$$\sum_{i=1}^{n}i=1+2+3+\cdots+n$$adds up to.

Straight from Gauss' problem, it's clear that he summed the numbers up to $100$ by multiplying $101$ by $50$. In other words, if we let $n=100$, then he just multiplied $n+1$ by $n/2$. That's it! We just generalized the problem and can now claim that$$\sum_{i=1}^{n}i=\frac{n(n+1)}{2}.$$From this expression, it follows that the sum of all the numbers from $1$ to $1,234,567$ is$$\frac{1,234,567\times1,234,568}{2}=762,078,456,028.$$I bet noticing this pattern saved our little fella some playground time.
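The formula is easy to test in code; a tiny Python function reproduces both Gauss' answer and the big sum above:

```python
def gauss_sum(n):
    """Gauss' pairing trick: 1 + 2 + ... + n = n (n + 1) / 2."""
    return n * (n + 1) // 2
```

The brute-force `sum(range(1, n + 1))` agrees, just several playground-minutes slower for large `n`.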

Discovering Epic $\LaTeX$ IDEs

So, today my adviser and I went over a few really good $\LaTeX$ IDEs and ended up choosing Texmaker as our default one for writing publishable, mathematical articles and journals.

I felt like typing one homework from my differential geometry class and this is what I ended up with (it's incomplete):

Differential Geometry Homework 2

I'm still pretty new to this, but hopefully I can end up putting really good-looking documents together (hopefully no one cheats off of this, lol).

Heat Equation with Zero-Temperature Boundaries

Homogeneous Linear ODEs with Constant Coefficients


As I was finalizing my study of the method of separation of variables, I ran into the inconsistency that I had forgotten how to solve homogeneous linear ODEs with constant coefficients, that is, equations of the form$$a\frac{d^2y}{dx^2}+b\frac{dy}{dx}+cy=0.$$Basically, we perform a 'lambda substitution' — we substitute $y=e^{\lambda x}$ — which yields the characteristic equation$$a\lambda^2+b\lambda+c=0,$$and solve the resulting quadratic for $\lambda$:$$\lambda=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$There are three cases:

  • If $b^2-4ac>0$, then we have a solution of the form$$y=c_1e^{\lambda_1x}+c_2e^{\lambda_2x}$$
  • If  $b^2-4ac=0$, then we have a solution of the form$$y=c_1e^{\lambda x}+c_2xe^{\lambda x}$$
  • If  $b^2-4ac<0$, then we have a solution of the form$$y=e^{\alpha x}(c_1\cos\beta x+c_2\sin\beta x)$$

The last one is derived from Euler's formula $e^{i\theta}=\cos\theta+i\sin\theta$, applied when the roots are complex: $\lambda=\alpha\pm i\beta$.

Example$$y''+3y'+2y=6$$To find the complementary function $y_c$, we perform a lambda substitution and solve:$$\lambda^2+3\lambda+2=0\\\lambda=-2\text{ or }-1\\y_c=c_1e^{-x}+c_2e^{-2x}$$The particular solution has the form $y_p=A$. Differentiating and substituting accordingly gives$$2A=6\Longrightarrow A=3$$Therefore, by the superposition principle, the general solution to the ODE is$$y=y_c+y_p=c_1e^{-x}+c_2e^{-2x}+3$$
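Not quite trusting my algebra, I checked the general solution numerically by plugging it back into the ODE with finite-difference derivatives (the constants are arbitrary picks of mine):

```python
import math

def y(x, c1=2.0, c2=-1.0):
    """General solution y = c1 e^{-x} + c2 e^{-2x} + 3 (constants picked arbitrarily)."""
    return c1 * math.exp(-x) + c2 * math.exp(-2 * x) + 3.0

def residual(x, h=1e-4):
    """y'' + 3 y' + 2 y - 6 with central-difference derivatives; should be ~ 0."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 + 3 * d1 + 2 * y(x) - 6.0
```

The residual stays at roundoff level for any $x$ and any choice of $c_1$, $c_2$, as superposition promises.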

The Method of Separation of Variables


We want to solve the PDE$$\frac{\partial u}{\partial t}=k\frac{\partial^2u}{\partial x^2}$$with initial and boundary conditions$$u(x,0)=f(x)\\u(0,t)=0\\u(L,t)=0$$To do that, let's assume the solution has the form$$u(x,t)=\varphi(x)G(t)$$Don't ask, just accept it. There's a reason behind picking such a form, but it wouldn't help to explain it now. With that said, let's plug this solution into our PDE to obtain$$\varphi(x)\frac{dG}{dt}=kG(t)\frac{d^2\varphi}{dx^2}$$Now, let's separate variables!$$\frac{1}{kG(t)}\frac{dG}{dt}=\frac{1}{\varphi(x)}\frac{d^2\varphi}{dx^2}$$If you look at what we just obtained, you're eventually going to ask yourself: "how's it possible for a function in terms of $t$ to be equal to a function in terms of $x$? This makes no sense at all!" Well, of course it doesn't, unless they're both equal to the same constant! Let's call this constant $\lambda$:$$\frac{1}{kG(t)}\frac{dG}{dt}=\frac{1}{\varphi(x)}\frac{d^2\varphi}{dx^2}=-\lambda$$Again, the reason we separated our equation this way and the reason we put a negative sign before $\lambda$ are for the purpose of convenience only. It'll all make sense down the road, so stop scratching your head and keep reading! By the way, notice that our partial derivatives turned into ordinary ones because we're now dealing with functions of one variable.

Now, let's solve the first ODE$$\frac{dG}{dt}=-\lambda kG(t)$$Our famous toolkit methods lead to the solution$$G(t)=ce^{-\lambda kt}$$With that out of the way, we can now solve the second ODE (it's a BVP)$$\frac{d^2\varphi}{dx^2}=-\lambda\varphi\\\varphi(0)=0\\\varphi(L)=0$$Calling this one a lambda substitution would be confusing, since $\lambda$ is already taken, so let's call it an $m$ substitution instead. It yields$$m=\pm\sqrt{-\lambda}$$Again we have three cases:

  • If $\lambda>0$, then$$\varphi=c_1\cos\left(\sqrt{\lambda}\,x\right)+c_2\sin\left(\sqrt{\lambda}\,x\right)$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_1=0\\\varphi(L)=0\Longrightarrow c_2\sin\left(\sqrt{\lambda}\,L\right)=0$$To avoid the trivial solution, we must have that$$\sin\left(\sqrt{\lambda}\,L\right)=0\Longrightarrow\lambda=\left(\frac{n\pi}{L}\right)^2$$where $n\in\mathbb{N}$. As a result, we're left with$$\varphi(x)=c_2\sin\frac{n\pi x}{L}$$
  • If $\lambda=0$, then$$\varphi=c_1+c_2x$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_1=0\\\varphi(L)=0\Longrightarrow c_2=0$$Therefore, $\lambda=0$ is not an eigenvalue for this problem.

  • If $\lambda<0$, then$$\varphi=c_1e^{\sqrt{s}\,x}+c_2e^{-\sqrt{s}\,x}$$where $s=-\lambda>0$. Let's use the hyperbolic functions to represent this:$$\varphi=c_3\cosh\left(\sqrt{s}\,x\right)+c_4\sinh\left(\sqrt{s}\,x\right)$$From the boundary conditions,$$\varphi(0)=0\Longrightarrow c_3=0\\\varphi(L)=0\Longrightarrow c_4\sinh\left(\sqrt{s}\,L\right)=0\Longrightarrow c_4=0$$because $\sqrt{s}\,L>0$ and $\sinh$ is never zero for a positive argument. Therefore, $\lambda<0$ is not an eigenvalue for this problem, either.

In summary, we obtained product solutions of the heat equation$$u(x,t)=B\sin\frac{n\pi x}{L}e^{-k(n\pi/L)^2t},\qquad n=1,2,3,\dots$$Finally, by the principle of superposition, we claim that the following infinite series is the solution of our heat conduction problem:$$u(x,t)=\sum_{n=1}^{\infty}B_n\sin\frac{n\pi x}{L}e^{-k(n\pi/L)^2t}$$Phew! I guess I could now do the same thing for the multidimensional case, lol.
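As a sanity check, here is a small Python sketch that plugs one product solution back into the PDE using finite differences (the constants are arbitrary illustrative values of mine):

```python
import math

K, L, N = 1.0, 1.0, 2   # diffusivity, rod length, mode number (illustrative values)

def u(x, t):
    """One product solution: sin(N pi x / L) e^{-K (N pi / L)^2 t}, with B = 1."""
    return math.sin(N * math.pi * x / L) * math.exp(-K * (N * math.pi / L) ** 2 * t)

def pde_residual(x, t, h=1e-4):
    """u_t - K u_xx via central differences; should be ~ 0 if u solves the PDE."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - K * u_xx
```

The residual stays tiny for any mode $N$, and $u(0,t)=0$ holds exactly, matching the boundary conditions.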

Multidimensional Heat Equation

Before moving on to the method of separation of variables, let us extend our one-dimensional heat equation to three dimensions by first introducing the gradient operator:$$\nabla=\frac{\partial}{\partial x}\hat i+\frac{\partial}{\partial y}\hat j+\frac{\partial}{\partial z}\hat k$$With that in mind, let us extend the heat energy to encompass a three-dimensional, arbitrary subregion $R$:$$\iiint_Rc\rho u\,dV$$In three dimensions, the heat flux becomes a vector $\varphi$ instead of a scalar whose sign indicated the flow direction ($\varphi>0$ meant flow to the right, etc.). Because of this, the heat flowing out across the closed boundary surface per unit time becomes$$\iint_{\partial R}\varphi\cdot\hat n\,dS$$where $\hat n$ is the outward unit normal vector to the region's surface.
It clearly follows that our multidimensional heat equation then turns into$$\frac{d}{dt}\iiint_Rc\rho u\,dV=-\iint_{\partial R}\varphi\cdot\hat n\,dS+\iiint_RQ\,dV$$But this is pretty hard to work with, since the closed surface integral is expressed in terms of the scalar product of two vectors. However, there is this really cool theorem called the divergence theorem, which, simply put, states that$$\iiint_R\nabla\cdot\vec A\,dV=\iint_{\partial R}\vec A\cdot\hat n\,dS$$We can use it to rewrite our heat equation above like this:$$\frac{d}{dt}\iiint_Rc\rho u\,dV=-\iiint_R\nabla\cdot\varphi\,dV+\iiint_RQ\,dV$$Since $R$ is arbitrary, moving the time derivative inside and stripping the integrals leaves us with$$c\rho\frac{\partial u}{\partial t}=-\nabla\cdot\varphi+Q$$This is analogous to the one-dimensional heat equation we derived in the last entry! All we need now is the three-dimensional equivalent of Fourier's law of heat conduction:$$\varphi=-K_0\nabla u$$That was pretty straightforward. Now, substituting, treating $c$, $\rho$, and $K_0$ as constants, and letting $Q=0$ yields our desired three-dimensional heat equation$$\frac{\partial u}{\partial t}=k\nabla^2u$$where $\nabla^2:=\nabla\cdot\nabla$ is often called the Laplacian.
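The same finite-difference trick from earlier entries verifies the three-dimensional version. Below, the separable function $e^{-3k\pi^2t}\sin\pi x\sin\pi y\sin\pi z$ (my own test function, with an arbitrary $k$) is plugged into $u_t=k\nabla^2u$:

```python
import math

K = 0.5  # thermal diffusivity (illustrative value)

def u(x, y, z, t):
    """Separable test solution: e^{-3 K pi^2 t} sin(pi x) sin(pi y) sin(pi z)."""
    return (math.exp(-3 * K * math.pi ** 2 * t)
            * math.sin(math.pi * x) * math.sin(math.pi * y) * math.sin(math.pi * z))

def residual(x, y, z, t, h=1e-4):
    """u_t - K (u_xx + u_yy + u_zz) via central differences; should be ~ 0."""
    u_t = (u(x, y, z, t + h) - u(x, y, z, t - h)) / (2 * h)
    lap = ((u(x + h, y, z, t) - 2 * u(x, y, z, t) + u(x - h, y, z, t))
           + (u(x, y + h, z, t) - 2 * u(x, y, z, t) + u(x, y - h, z, t))
           + (u(x, y, z + h, t) - 2 * u(x, y, z, t) + u(x, y, z - h, t))) / h ** 2
    return u_t - K * lap
```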

I had to get sidetracked there a little bit, but stay tuned for the next entry, where we will be discussing the method of separation of variables!

Some Insight on Differential Geometry

Suppose that you have a circle rolling on a flat surface without slipping and that it also has a dot on its edge. What path does the dot trace?

Tackling this question by means of the conventional, functional approach $y=f(x)$ is a tedious task. For that reason, we parameterize the curve in terms of a convenient variable $t$, as follows: $r(t)=g_1(t)\partial_x+g_2(t)\partial_y$. This is akin to a vector function.

Visually, this is the problem:


Taking $t$ to be the angle that the radius to the dot makes with the circle's downward vertical yields the following parameterization:$$r(t)=R\left[t+\cos\left(t-\frac{3\pi}{2}\right)\right]\partial_x-R\sin\left(t-\frac{3\pi}{2}\right)\partial_y.$$Plotting this parametric equation yields the following graph:

This is the path that the dot traces. Now, a good question could be, what exactly happens at the cusp? Also, what would happen if we slide the dot upward or downward from the circumference?
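Before moving on, here is a quick Python check (the helper names are mine) that the parameterization above really is the classic cycloid in disguise, via the identities $\cos(t-3\pi/2)=-\sin t$ and $\sin(t-3\pi/2)=\cos t$:

```python
import math

R = 1.0  # radius of the rolling circle (illustrative)

def r(t):
    """The entry's parameterization of the dot's path."""
    x = R * (t + math.cos(t - 3 * math.pi / 2))
    y = -R * math.sin(t - 3 * math.pi / 2)
    return x, y

def cycloid(t):
    """Standard cycloid, shifted so the circle's center rolls along y = 0."""
    return R * (t - math.sin(t)), -R * math.cos(t)

# The two parameterizations should agree for every t.
```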

Derivation of the Heat Equation

ODEs vs PDEs


I began studying ODEs by solving equations straight off the bat. I started with the simplest one I could think of,$$\frac{dy}{dx}=0,$$and moved on to progressively more difficult ones, like$$\frac{dy}{dx}=x^2.$$While learning, I developed a 'toolkit' of techniques that could solve equations that met very specific criteria. For example, one 'tool' claimed that an equation of the form$$\frac{dy}{dx}+P(x)y=f(x)$$had a solution$$y=\frac{\int e^{\int P(x)\,dx}f(x)\,dx}{e^{\int P(x)\,dx}}.$$I picked up several other tools that could tackle every kind of ODE that you could imagine. However, there were never enough: even the set of linear ODEs alone,$$\sum_{k=0}^{n}a_k(x)\frac{d^ky}{dx^k}=f(x),$$contains only a vanishingly small subset of equations that can be solved by analytic methods, with the rest requiring a numerical approach instead. Moreover, when it comes to nonlinear ODEs, what lies before our eyes, more often than not, is a vast ocean full of esoteric sea creatures totally disjoint from all of our equation-solving findings, so to speak.

I thought that I could try an a posteriori approach toward PDEs, and although it could work, it would definitely never provide the theoretical groundwork required to reach higher levels of understanding. To circumvent this issue, I decided to opt for a physically motivated introduction to them, since mathematical legends such as Laplace drew plenty of inspiration from the observational works of Newton and similar applied scientists.

Now, without further ado, let me present the heat equation to you!

 

The Heat Equation


The first PDE that we are going to study is called the heat equation. Let us derive it by considering a one-dimensional model rod of length $L$ with a perfectly insulated lateral surface. Visually:

  

The heat equation is governed by the word equation:$$\begin{matrix}\text{rate of change}\\\text{of heat energy}\\\text{in time}\end{matrix}=\begin{matrix}\text{heat energy flowing}\\\text{across boundaries}\\\text{per unit time}\end{matrix}+\begin{matrix}\text{heat energy generated}\\\text{inside per unit time.}\end{matrix}$$Therefore, let us define the following functions:$$\begin{align}e(x,t)&:=\text{thermal energy density}\\\varphi(x,t)&:=\begin{matrix}\text{heat flux (the amount of thermal energy per unit}\\\text{time flowing to the right per unit surface area)}\end{matrix}\\Q(x,t)&:=\text{heat energy per unit volume generated per unit time}\\\end{align}$$Our word equation then becomes$$\frac{\partial}{\partial t}\left[e(x,t)A\Delta x\right]\approx\varphi(x,t)A-\varphi(x+\Delta x,t)A+Q(x,t)A\Delta x$$Notice that this is only an approximation because we are taking a very small slice of the rod of size $\Delta x$ (the functions are deemed almost constant at this size). We claim, however, that it becomes exact as we let $\Delta x\to0$:$$\frac{\partial e}{\partial t}=\lim_{\Delta x\to0}\frac{\varphi(x,t)-\varphi(x+\Delta x,t)}{\Delta x}+Q(x,t)=-\frac{\partial\varphi}{\partial x}+Q(x,t)$$We usually talk about the temperature of an object rather than its thermal energy density. Therefore, let us introduce the functions$$\begin{align}c(x)&:=\text{specific heat}\\\rho(x)&:=\text{mass density}\\u(x,t)&:=\text{temperature}\end{align}$$and the equality$$e(x,t)=c(x)\rho(x)u(x,t)$$Substituting yields a new equation$$c(x)\rho(x)\frac{\partial u}{\partial t}=-\frac{\partial\varphi}{\partial x}+Q(x,t)$$This is the heat equation. However, we cannot solve it since it involves two unknown functions $u$ and $\varphi$. 
For this reason, we introduce Fourier's law of heat conduction:$$\varphi(x,t)=-K_0\frac{\partial u}{\partial x}$$Substituting it into the heat equation yields$$c(x)\rho(x)\frac{\partial u}{\partial t}=K_0\frac{\partial^2u}{\partial x^2}+Q(x,t)$$For the sake of simplicity, let us treat $c$, $\rho$, and $K_0$ as constants. Hence,$$\frac{\partial u}{\partial t}=k\frac{\partial^2u}{\partial x^2}+\frac{Q(x,t)}{c\rho}$$where $k=K_0/c\rho$. Letting $Q=0$ yields the canonical heat equation; this is what we are mostly going to work with from here on.
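To see the equation in action, here is a minimal explicit finite-difference simulation in Python (the grid sizes and constants are arbitrary choices of mine). With $u(x,0)=\sin\pi x$ and zero-temperature ends, the numerical profile should decay like the exact solution $\sin(\pi x)e^{-k\pi^2t}$:

```python
import math

K, L = 1.0, 1.0                 # diffusivity and rod length (illustrative)
NX, DT, STEPS = 50, 1e-5, 2000  # grid points, time step, number of steps
DX = L / NX

# Initial condition u(x, 0) = sin(pi x); the exact solution is sin(pi x) e^{-K pi^2 t}.
u = [math.sin(math.pi * i * DX) for i in range(NX + 1)]

for _ in range(STEPS):
    # Explicit update u_i += K dt/dx^2 * (u_{i+1} - 2 u_i + u_{i-1}),
    # with the zero-temperature boundaries pinned at both ends.
    # Stable because K * DT / DX**2 = 0.025 <= 1/2.
    u = ([0.0]
         + [u[i] + K * DT / DX ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, NX)]
         + [0.0])

t_final = STEPS * DT  # = 0.02
```

At the midpoint, the simulated value sits within a fraction of a percent of $e^{-\pi^2 t}$, which is reassuring for such a crude scheme.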

This is how the heat equation is derived. The equalities we presented are physical in nature, but since this is a mathematical blog, we are just going to accept them as true. In the next entry, we are going to go over the method of separation of variables to solve the heat equation with initial and boundary conditions$$\begin{align}\text{IC}&:u(x,0)=f(x)\\\text{BCs}&:\left\{\begin{matrix}u(0,t)&=0\\u(L,t)&=0\end{matrix}\right.\end{align}$$I will talk more about initial and boundary conditions in the next entry as well. Until then, have fun!

In the Beginning...

Hello!

When I first began studying ordinary differential equations (ODEs), I immediately fell in love with their partial counterparts. To me, it felt as though the study of ODEs was a simple setup for understanding partial differential equations (PDEs) sometime in the near future. As I got further into the subject of ODEs, however, I began to experience how difficult all the theoretical groundwork to solving them really was. It all got to a point where we were studying only certain ideal cases of these equations, because it so happened that the vast majority of them had either no analytical solutions or required some arcane numerical methods to solve. All throughout this journey, every once in a while, I would remember PDEs and cringe at the thought of having to solve one of those, but I really wanted to nevertheless, especially after learning that they are a cutting-edge subject still in its theoretical infancy.

In my study of ODEs, I learned how to solve equations of the form$$\sum_{k=0}^{n}a_k(x)\frac{d^ky}{dx^k}=b(x)$$by means of countless methods akin to recipes in a cookbook. Perhaps this was due to the nature of the course seeking to instill the idea in the form of mathematical euphemisms, to put it that way. For example, to solve the ODE$$\frac{dy}{dx}=e^{3x+2y}$$we must:

(1) recognize that it is separable and separate$$\frac{dy}{dx}=(e^{3x})(e^{2y})\Longrightarrow e^{-2y}dy=e^{3x}dx$$(2) integrate and 'absorb' integrating constants$$\int e^{-2y}dy=\int e^{3x}dx\Longrightarrow-\frac{1}{2}e^{-2y}=\frac{1}{3}e^{3x}+C$$(3) solve for $y$$$y=-\frac{1}{2}\ln\left(-\frac{2}{3}e^{3x}+C\right)$$But how can we be sure that this method works? That is, how do we know that this is the solution? Is it unique? Or, in other words, could there be others?
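Whatever the theory says, we can at least verify numerically that the recipe's output satisfies the ODE (the constant $C$ is an arbitrary choice of mine, picked to keep the logarithm's argument positive):

```python
import math

C = 10.0  # arbitrary integration constant (must keep the log's argument positive)

def y(x):
    """The solution from step (3): y = -1/2 ln(-2/3 e^{3x} + C)."""
    return -0.5 * math.log(-2.0 / 3.0 * math.exp(3 * x) + C)

def residual(x, h=1e-6):
    """dy/dx - e^{3x + 2y}; should vanish if y really solves the ODE."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - math.exp(3 * x + 2 * y(x))
```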

An even more outrageous 'recipe' to solving an ODE is the idea of an integrating factor, which claims that an ODE of the form$$\frac{dy}{dx}+P(x)y=f(x)$$can be solved by multiplying everything by the integrating factor $e^{\int P(x)dx}$. For example, solve$$3\frac{dy}{dx}+12y=4.$$(1) get into standard form$$\frac{dy}{dx}+4y=\frac{4}{3}$$(2) find the integrating factor$$e^{\int4dx}=e^{4x}$$(3) multiply everything by it$$e^{4x}\frac{dy}{dx}+4e^{4x}y=\frac{4}{3}e^{4x}$$(4) wrap it all up by factoring, integrating and solving for $y$$$\begin{align}\frac{d}{dx}\left[e^{4x}y\right]&=\frac{4}{3}e^{4x}\\e^{4x}y&=\frac{1}{3}e^{4x}+C\\y&=Ce^{-4x}+\frac{1}{3}\end{align}$$What in the world? Who came up with that method, where is the theorem that supports it, and can it be applied to other differential equations?
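Again, a quick numerical check that the 'outrageous' recipe really produced a solution (the constant is an arbitrary pick of mine):

```python
import math

def y(x, c=2.5):
    """General solution y = C e^{-4x} + 1/3, for an arbitrary constant C."""
    return c * math.exp(-4 * x) + 1.0 / 3.0

def residual(x, c=2.5, h=1e-6):
    """3 y' + 12 y - 4 with a central-difference derivative; should be ~ 0."""
    dydx = (y(x + h, c) - y(x - h, c)) / (2 * h)
    return 3 * dydx + 12 * y(x, c) - 4.0
```

The residual vanishes for every $x$ and every $C$, so at least the recipe's output checks out, even if its justification is still a mystery to me.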

Another thing that blew my mind was the Principle of Superposition, which claims that if a homogeneous linear ODE has two solutions $y_1$ and $y_2$, then their sum $y_1+y_2$ is another solution. Why is that true!? It makes total sense when you look at it from a physical point of view (think of overlapping waves), but where is the mathematical groundwork?

There are many other methods or 'recipes' to solve ODEs, like substitutions, exact and Bernoulli forms, numerical methods, Laplace transforms and Fourier series, to name a few, but most of them were presented to us in the form of a toolkit obtained by rote memorization.

Now that I am studying PDEs, we began our journey with the convoluted heat equation. There are many things that I have accumulated about it thus far, but that will be the subject of another entry. All I will say right now is that my most dreaded nightmare has come true; PDEs are orders of magnitude harder!

In a good way, however, and profound enough that we just 'get' the main idea behind the inspiration of all the mathematical behemoths who pioneered their theoretical development. It is no longer a toolkit approach, but one based on maieutics channeled toward our deepest analytic conscience. Perhaps back then I was not ready to undergo such a rigorous treatment of these kinds of equations, but now I feel just a tad more ready to give it one heck of a good try.

Stay tuned!