# Spectral method to solve a parabolic equation

# Introduction and problem definition

The objective of this article is to present the application of the spectral method to solving parabolic partial differential equations (PDEs). The theoretical development is carried out alongside a practical study of a typical problem.

Let $a\frac{\partial^2 u}{\partial x^2} + b\frac{\partial^2 u}{\partial x \partial y} + c\frac{\partial^2 u}{\partial y^2} + d\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} + fu = g$ be a second order linear partial differential equation with constant coefficients $a,b,c,d,e,f$. Such an equation is said to be parabolic when $b^2-4ac=0$, elliptic when $b^2-4ac<0$, and hyperbolic when $b^2-4ac>0$.

The boundary value problem that we will study is

$$ (P)\begin{cases} \frac{\partial u}{\partial t} - \Delta u = f(x,y,t),~ f\in L^2(\Omega\times]0,T[ ),~ \Omega= ]0,1[\times]0,1[\cr u(x,0,t)=\frac{x^2}{2}\cr u(x,1,t)=\frac{-(x-1)^2}{2}e^{-t}\cr \frac{\partial u}{\partial n}(0,y,t) = -ye^{-t}\cr \frac{\partial u}{\partial n}(1,y,t) = 1-y\cr u(x,y,0) = \sin(2\pi y)-x^2y+xy-\frac{y}{2}+\frac{x^2}{2} \end{cases} $$

The problem models heat diffusion over time in a square homogeneous material, heated by a given heat source $f$ of finite energy. The boundary conditions are of Dirichlet type on the horizontal sides and of Neumann type on the vertical sides.

Finally, the initial condition gives us information on the initial heat distribution on the domain.

The solution is broken down into four steps:

- Homogenizing the problem, i.e. transforming the initial formulation such that the boundary conditions are equal to $0$,
- Solving the Spectral problem associated to the Laplacian operator, i.e. finding its eigenvalues and eigenfunctions,
- Rewriting the homogenized problem into the spectral basis
- Solving the ordinary differential equation that arises from the previous step.

## Homogenization via Ruled Surface

The boundary conditions are not homogeneous, so we first need to transform the original problem $(P)$ into a homogeneous one. To do so, we define $w(x,y,t) = \phi(x,y,t) + u(x,y,t)$. Looking at the boundary conditions, we see that we need a ruled surface $\phi$ such that the problem satisfied by $w$ has homogeneous boundary conditions; because the boundary data depend on time, $\phi$ will also depend on $t$.

### The Ruled Surface

We look for $\phi$ of the form $\phi(x,y,t) = g_1(x) + y\,g_2(x,t)$.

Using the boundary conditions, we obtain:

$$\begin{cases}w(x,0,t) = \phi(x,0,t) + u(x,0,t) = 0 \Rightarrow \boxed{g_1(x) = -\frac{x^2}{2}}\\ w(x,1,t) = \phi(x,1,t) + u(x,1,t) = 0 \Rightarrow \boxed{g_2(x,t) = \frac{x^2}{2} + \left(\frac{x^2}{2} - x + \frac{1}{2}\right)e^{-t}}\end{cases}$$

Therefore, the ruled surface $\phi$ that we are looking for is:

$$\boxed{\phi(x,y,t) = -\frac{x^2}{2} + y\left(\frac{x^2}{2} + \left(\frac{x^2}{2} - x + \frac{1}{2}\right)e^{-t}\right)}$$

One checks that this $\phi$ also cancels the Neumann data on the vertical sides.
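As a sanity check, the boundary conditions can be verified symbolically. The sketch below (using sympy, with the boundary data of $(P)$) confirms that $w = \phi + u$ satisfies homogeneous Dirichlet and Neumann conditions:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Dirichlet data of the original problem (P)
u_y0 = x**2/2                        # u(x,0,t)
u_y1 = -(x - 1)**2/2*sp.exp(-t)      # u(x,1,t)

# the ruled surface derived above
phi = -x**2/2 + y*(x**2/2 + (x**2/2 - x + sp.Rational(1, 2))*sp.exp(-t))

# Dirichlet sides: w = phi + u must vanish at y=0 and y=1
assert sp.simplify(phi.subs(y, 0) + u_y0) == 0
assert sp.simplify(phi.subs(y, 1) + u_y1) == 0

# Neumann sides: the normal derivative of phi must cancel that of u
dphi = sp.diff(phi, x)
assert sp.simplify(-dphi.subs(x, 0) - y*sp.exp(-t)) == 0   # n = -e_x at x=0
assert sp.simplify(dphi.subs(x, 1) + (1 - y)) == 0         # n = +e_x at x=1
```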

### Writing the homogeneous problem

We have $w(x,y,t) = \phi(x,y,t) + u(x,y,t)$, so that, by linearity of the Laplacian operator, $- \Delta u(x,y,t) = \Delta \phi(x,y,t) - \Delta w(x,y,t)$.

Similarly, $\frac{\partial u}{\partial t}(x,y,t) = \frac{\partial w}{\partial t}(x,y,t) - \frac{\partial \phi}{\partial t}(x,y,t)$.

Therefore, $\frac{\partial w}{\partial t}(x,y,t) - \Delta w(x,y,t) = f(x,y,t) + \frac{\partial \phi}{\partial t}(x,y,t) - \Delta \phi(x,y,t)$.

By definition of $\phi$, we have:

$$\frac{\partial \phi}{\partial t}(x,y,t) - \Delta \phi(x,y,t) = 1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right)$$

which allows us to write the following equivalence:

$$(P) \Leftrightarrow (P_h)\begin{cases} \frac{\partial w}{\partial t} - \Delta w = f(x,y,t) + 1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right),~ f\in L^2(\Omega\times]0,T[ )\cr w(x,0,t)=0\cr w(x,1,t)=0\cr \frac{\partial w}{\partial n}(0,y,t) = 0\cr \frac{\partial w}{\partial n}(1,y,t) = 0\cr w(x,y,0) = \sin(2\pi y) \end{cases}$$
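The right-hand side and the initial condition of $(P_h)$ can be checked the same way; a short sympy computation, reusing the $\phi$ derived above, confirms both:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
phi = -x**2/2 + y*(x**2/2 + (x**2/2 - x + sp.Rational(1, 2))*sp.exp(-t))

# extra source term picked up by the homogenized problem
src = sp.diff(phi, t) - (sp.diff(phi, x, 2) + sp.diff(phi, y, 2))
expected = 1 - y*(1 + sp.exp(-t)*(x**2/2 - x + sp.Rational(3, 2)))
assert sp.simplify(src - expected) == 0

# homogenized initial condition: w(x,y,0) = phi(x,y,0) + u(x,y,0)
u0 = sp.sin(2*sp.pi*y) - x**2*y + x*y - y/2 + x**2/2
assert sp.simplify(phi.subs(t, 0) + u0 - sp.sin(2*sp.pi*y)) == 0
```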

## Solving the spectral problem associated to $(P_h)$

At first, we will focus only on the spatial part of the problem. Since the inverse of the Laplacian (with these boundary conditions) is a compact self-adjoint operator, we look for the eigenvalues and eigenfunctions of the Laplacian, $\lambda_k$ and $w_k$, solutions of the equation:

$$-\Delta w = \lambda w~~(PS)$$

The existence of such eigenpairs in $L^2(\Omega)$ is guaranteed by the following spectral decomposition theorem:

**Theorem:**

Let $V$ and $H$ be two infinite-dimensional real Hilbert spaces. We assume that $V \subset H$ with compact injection, i.e. the inclusion operator is continuous and compact, and that $V$ is dense in $H$. Let $a(.,.)$ be a symmetric, continuous, $V$-elliptic bilinear form. Then the eigenvalues of the problem $\forall v\in V,~~~ a(u,v) = \lambda<u,v>_H$ form an increasing sequence of positive reals $(\lambda_k)_{k \geq 1}$ tending to infinity, and there exists a Hilbert basis of $H$ of associated eigenfunctions, defined by:

$$u_k \in V, {\text{ and }} a(u_k,v) = \lambda_k<u_k,v>_H, ~~~ \forall v \in V$$

Furthermore, $(\frac{u_k}{\sqrt{\lambda_k}})_{k \geq 1}$ is also a Hilbert basis of the space $V$ equipped with the inner product $a(.,.)$ (and not its canonical inner product).

The proof of this theorem is rather long and relies on the spectral decomposition theorem for compact operators on a Hilbert space; this is why the compactness and continuity of the inclusion operator are needed. Details can be found in [1].

To take into account the Dirichlet conditions, we choose $V=H_0^1(\Omega)$ and $H=L^2(\Omega)$ (strictly speaking, with our mixed conditions, $V$ should be the subspace of $H^1(\Omega)$ of functions vanishing only on the horizontal sides, but the argument is identical). A variational approach applied to $-\Delta w = \lambda w$ naturally leads to $a(u,v) = \int_{\Omega} \nabla u \cdot \nabla v \,dx = \lambda \int_{\Omega} uv \,dx= \lambda <u,v>_{L^2(\Omega)}$. By Rellich's theorem, $H_0^1(\Omega)$ is indeed compactly embedded in $L^2(\Omega)$. As the space of test functions $\cal{D}(\Omega)$ is dense in both spaces, we deduce that $H_0^1(\Omega)$ is dense in $L^2(\Omega)$, and we can apply the spectral theorem above, which proves the existence of solutions to the eigenvalue problem.

To return to the original problem, a Green formula suffices, taking the boundary conditions into account and using the trace theorem to conclude. More simply, having already verified the compact and continuous injection, we can work in the sense of distributions:

$$\begin{matrix}~ \int_{\Omega} \nabla u . \nabla v dx = \lambda \int_{\Omega} uv dx\cr \Leftrightarrow < \nabla u , \nabla v > = \lambda < u, v>\cr \Leftrightarrow -< \Delta u , v > = \lambda < u, v>\end{matrix}$$

Hence $-\Delta u = \lambda u$ in the sense of distributions. From the density of $\cal D(\Omega)$ in $H_0^1(\Omega)$ and $L^2(\Omega)$, we deduce that the equality also holds in the sense of $L^2(\Omega)$.

Now that we are convinced of the existence of solutions of $(PS)$, we need to make these solutions explicit. To do this, we use separation of variables, setting $w(x,y)=X(x)Y(y)$.

From the boundary conditions we can deduce that:

$$\begin{cases}w(x,0) = 0 = X(x)Y(0) \Rightarrow \boxed{Y(0)=0}\\ w(x,1) = 0 = X(x)Y(1)\Rightarrow \boxed{Y(1)=0}\end{cases}$$

$$\begin{cases}\frac{\partial w}{\partial x}(0,y) = X'(0)Y(y) = 0\\ \frac{\partial w}{\partial x}(1,y) = X'(1)Y(y) = 0\end{cases} \Rightarrow \boxed{\begin{cases} X'(0)=0\\X'(1)=0\end{cases}}$$

Therefore:

$$\begin{matrix}~ (PS) & \Leftrightarrow & -X''(x)Y(y) - X(x)Y''(y) = \lambda X(x)Y(y) \cr & \Leftrightarrow &-\frac{X''(x)}{X(x)} - \frac{Y''(y)}{Y(y)} = \lambda \cr &\Rightarrow & \begin{cases} - \frac{X''(x)}{X(x)}=\alpha~~~(*)\cr - \frac{Y''(y)}{Y(y)}=\beta~~~(**)\cr \alpha+\beta = \lambda \end{cases} \end{matrix}$$

The resolution poses no difficulty, since $(*)$ and $(**)$ are second-order linear differential equations with constant coefficients. A simple analysis of the characteristic equation gives the form of the solutions, and the conditions on $X'$ and $Y$ deduced from the boundary conditions of the homogeneous problem eliminate the identically zero solutions, which are of no interest mathematically, and even less so physically.

### Solving $(*)$

$$- \frac{X''(x)}{X(x)}=\alpha \Rightarrow X''(x) + \alpha X(x) = 0$$

From the characteristic equation we find:

$$ \Delta = -4\alpha$$

We then distinguish 3 cases:

#### $\alpha > 0$

Thus $\Delta < 0$ and $X$ is of the form $X(x) = A_x \cos(\sqrt \alpha x) + B_x \sin(\sqrt \alpha x)$.

We deduce that $X'(x) = -\sqrt \alpha A_x \sin(\sqrt \alpha x) + \sqrt \alpha B_x \cos(\sqrt \alpha x)$.

From the boundary conditions we deduce that:

$$\begin{cases}X'(0) = \sqrt \alpha B_x = 0 \Rightarrow B_x = 0\\ X'(1) = -\sqrt \alpha A_x \sin(\sqrt \alpha) = 0 \Rightarrow \sin(\sqrt \alpha) = 0\end{cases}$$

We deduce that $\alpha_k = k^2\pi^2, \forall k>0$ and therefore $ X_k(x) = A_x\cos(k\pi x), \forall k>0$.
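A quick symbolic check that $X_k(x)=\cos(k\pi x)$ indeed satisfies $(*)$ with $\alpha_k = k^2\pi^2$ and the Neumann conditions:

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', integer=True, positive=True)
X = sp.cos(k*sp.pi*x)

# -X'' = alpha*X with alpha = (k*pi)^2
assert sp.simplify(-sp.diff(X, x, 2) + (k*sp.pi)**2*X - 2*(k*sp.pi)**2*X) == 0
# Neumann conditions at both ends (sin(k*pi) = 0 for integer k)
assert sp.diff(X, x).subs(x, 0) == 0
assert sp.diff(X, x).subs(x, 1) == 0
```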

#### $\alpha = 0$

So $\Delta = 0$ and $X$ is of the form $X(x) = A_xx + B_x$. We deduce that $X'(x) = A_x$.

From the boundary conditions we deduce that:

$$X'(0) = A_x = 0$$

So $X(x) = B_x$, constant solution.

#### $\alpha < 0$

So $\Delta > 0$ and $X$ is of the form $X(x) = A_xe^{-\sqrt{-\alpha}x} + B_xe^{\sqrt{-\alpha}x}$.

From the boundary conditions $X'(0) = X'(1) = 0$ we deduce that $A_x = B_x = 0$. So $X \equiv 0$, which is not interesting.

### Final formulation

Finally, we can group the two remaining cases into a single expression:

$$X_k(x) = C_x\cos(k\pi x), \forall k \geq 0$$

With :

$$C_x = \begin{cases} B_x & \text{ if } k=0 \cr A_x & \text{ if } k>0 \end{cases}$$

### Solving $(**)$

In the same way we solve the equation $(**)$. From the characteristic equation we find:

$$\Delta = -4\beta$$

We then distinguish 3 cases:

#### Case $\beta > 0$

Thus $\Delta < 0$ and $Y$ is of the form $Y(y) = A_y \cos(\sqrt \beta y) + B_y \sin(\sqrt \beta y)$. From the boundary conditions we deduce that:

$$\begin{cases}Y(0) = 0 \Rightarrow A_y = 0\\ Y(1) = 0 \Rightarrow \sin(\sqrt \beta) = 0\end{cases}$$

We deduce that $\beta_k = k^2\pi^2, \forall k>0$ and therefore $Y_k(y) = B_y\sin(k\pi y), \forall k>0$.

#### Case $\beta = 0$

So $\Delta = 0$ and $Y$ is of the form $Y(y) = A_yy + B_y$.

From the boundary conditions we deduce that:

$$\begin{cases}Y(0) = B_y = 0\\ Y(1) = A_y + B_y = 0 \Rightarrow A_y = 0\end{cases}$$

So $Y \equiv 0$, which is not interesting.

#### Case $\beta < 0$

Here $\Delta > 0$ and $Y$ is of the form $Y(y) = A_ye^{-\sqrt{-\beta}y} + B_ye^{\sqrt{-\beta}y}$. The boundary conditions $Y(0)=Y(1)=0$ again force $A_y = B_y = 0$, so only the trivial solution remains.

#### Final formulation

The solution to $(**)$ is therefore written:

$$Y_k(y) = B_y\sin(k\pi y), \forall k > 0$$

## Rewriting the problem in the Spectral basis

By solving $(*)$ and $(**)$ we deduce that the eigenvalues and eigenvectors of the Laplacian operator are:

$$\begin{cases} w_{k,l}(x,y) = C\sin(l\pi y)\cos(k\pi x)\\ \lambda _{k,l} = \pi^2(l^2 + k^2) \end{cases}, \forall k \geq 0,~ \forall l \geq 1$$
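These eigenpairs can be verified directly; for instance, for an arbitrary mode $(k,l)$ chosen as an example:

```python
import sympy as sp

x, y = sp.symbols('x y')
k, l = 3, 2   # an arbitrary mode, as an example
w_kl = sp.sin(l*sp.pi*y)*sp.cos(k*sp.pi*x)
lam = sp.pi**2*(k**2 + l**2)

# -Laplacian(w_kl) = lambda_{k,l} * w_kl
assert sp.simplify(-sp.diff(w_kl, x, 2) - sp.diff(w_kl, y, 2) - lam*w_kl) == 0
```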

According to the spectral decomposition theorem cited above (which rests on Riesz-Fredholm theory), there exists a Hilbert basis (orthonormal by definition) of $L^2(\Omega)$ made of eigenfunctions belonging to $H^1_0(\Omega)$, together with an increasing sequence of positive eigenvalues tending to infinity.

### Normalization of basis vectors

Before rewriting the problem in the basis of eigenfunctions thus found, let us determine the constant $C$ such that the basis is orthonormal in $L^2(\Omega)$.

Let's solve:

$$\begin{matrix} ~ \vert \vert w_{k,l}\vert \vert_{L^2(\Omega)} = 1 & \Leftrightarrow & \int_0^1 \int_0^1 C^2\sin^2(l \pi y) \cos^2(k \pi x)\, dxdy = 1 \cr & \Leftrightarrow & C^2 \underset{\frac{1}{2}}{\underbrace{\int_0^1 \sin^2(l \pi y) dy}} \underset{1 \text{ if } k=0,~ \frac{1}{2} \text{ otherwise}}{\underbrace{\int_0^1 \cos^2(k \pi x) dx}} = 1\cr & \Leftrightarrow & \begin{cases} C = \sqrt 2 & \text{ if } k=0 \cr C = 2 & \text{ otherwise } \end{cases} \end{matrix}$$
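The orthonormality can also be checked numerically. The sketch below uses a midpoint quadrature rule on the unit square (spectrally accurate for these trigonometric products) with an arbitrary grid size:

```python
import numpy as np

def w(k, l, X, Y):
    # normalization constant from the computation above
    C = np.sqrt(2.0) if k == 0 else 2.0
    return C*np.sin(l*np.pi*Y)*np.cos(k*np.pi*X)

# midpoint quadrature nodes on ]0,1[
n = 1000
xs = (np.arange(n) + 0.5)/n
X, Y = np.meshgrid(xs, xs, indexing='ij')

def inner(f, g):
    # midpoint-rule approximation of the L2(Omega) inner product
    return (f*g).mean()

assert abs(inner(w(0, 2, X, Y), w(0, 2, X, Y)) - 1) < 1e-9   # unit norm, k=0
assert abs(inner(w(1, 1, X, Y), w(1, 1, X, Y)) - 1) < 1e-9   # unit norm, k>0
assert abs(inner(w(1, 1, X, Y), w(2, 3, X, Y))) < 1e-9       # orthogonality
```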

### Rewriting the problem in the spectral basis

We can write our solution in the orthonormal basis found:

$$ w(x,y,t) = \sum_{l,k}\xi_{l,k}(t)w_{l,k}(x,y)~~ ~ (S)$$

According to the problem statement, $f(\cdot,\cdot,t)\in L^2(\Omega)$, so it can also be expanded in this basis:

$$f(x,y,t) = \sum_{l,k}f_{l,k}(t)w_{l,k}(x,y)$$

with $f_{l,k}(t) = <f(\cdot,\cdot,t), w_{l,k}>_{L^2(\Omega)}$, i.e. the projection of $f$ on the vectors of the Hilbert basis $(w_{l,k})$.

We can therefore rewrite the initial problem in $w$ as follows:

$$\begin{matrix} ~ & \frac{\partial w}{\partial t} - \Delta w = f(x,y,t) + 1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right) & \cr \Leftrightarrow & \frac{\partial }{\partial t}\sum_{l,k}\xi_{l,k}(t)w_{l,k}(x,y) - \Delta \sum_{l,k}\xi_{l,k}(t)w_{l,k}(x,y) & \cr & = \sum_{l,k}f_{l,k}(t)w_{l,k}(x,y) + 1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right) \end{matrix}$$

For a completely explicit resolution, we would normally project $1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right)$ on the basis of ${L^2(\Omega)}$ found above, but this is of little interest here.

We will simply set $\tilde{f}(x,y,t) = f(x,y,t) + 1-y\left(1+e^{-t}\left(\frac{x^2}{2}-x+\frac{3}{2}\right)\right)$. This allows us to rewrite the problem as follows:

$$\frac{\partial }{\partial t}\sum_{l,k}\xi_{l,k}(t)w_{l,k}(x,y) - \Delta \sum_{l,k}\xi_{l,k}(t)w_{l,k}(x,y) = \sum_{l,k}\overset{\sim }{f}_{l,k}(t)w_{l,k}(x,y)$$

We use the linearity of the differentiation operators to bring the Laplacian and the time derivative inside the sum, and the relation $\Delta w_{l,k} = -\lambda_{l,k} w_{l,k}$ to factor as follows:

$$\sum_{l,k}[\xi'_{l,k}(t) + \lambda_{l,k} \xi_{l,k}(t)]w_{l,k}(x,y) = \sum_{l,k}\tilde{f}_{l,k}(t)w_{l,k}(x,y)$$

Identifying the coefficients, this amounts to solving, for each mode, an ordinary differential equation:

$$\xi'_{l,k}(t) + \lambda_{l,k} \xi_{l,k}(t) = \tilde{f}_{l,k}(t)$$

## Solving the ordinary differential equation

The resolution of the ODE is very simple, using the initial condition to obtain the values $\xi_{l,k}(0) = <w(\cdot,\cdot,0), w_{l,k}>_{L^2(\Omega)}$ and the explicit expression of the $\tilde{f}_{l,k}(t)$.

The technique that always works is, of course, variation of constants, which gives

$$\xi_{l,k}(t) = \xi_{l,k}(0)e^{-\lambda_{l,k}t} + \int_0^t e^{-\lambda_{l,k}(t-s)}\tilde{f}_{l,k}(s)\,ds.$$

I leave it to the reader to carry out these calculations in order to obtain the $\xi_{l,k}(t)$.
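The variation-of-constants formula can be checked against a generic ODE integrator. The values of $\lambda$, $\xi(0)$ and $\tilde f$ below are illustrative assumptions, not data from the problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 5*np.pi**2          # e.g. the eigenvalue of the mode (k,l) = (1,2)
xi0 = 0.5                 # illustrative initial coefficient (assumption)
f = lambda t: np.exp(-t)  # illustrative stand-in for f~_{l,k}(t) (assumption)

def xi_exact(t):
    # variation of constants, with the integral computed in closed form:
    # int_0^t exp(-lam*(t-s))*exp(-s) ds = (exp(-t) - exp(-lam*t))/(lam - 1)
    return xi0*np.exp(-lam*t) + (np.exp(-t) - np.exp(-lam*t))/(lam - 1)

sol = solve_ivp(lambda t, xi: f(t) - lam*xi, (0.0, 1.0), [xi0],
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (0.1, 0.5, 1.0):
    assert abs(sol.sol(t)[0] - xi_exact(t)) < 1e-7
```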

## Final solution

Once the ODE is solved, the job is complete. If an explicit expression is really desired, it suffices to substitute the $\xi_{l,k}$, which depend explicitly on the $\tilde{f}_{l,k}$, into the expression $(S)$.

We could go a little further and give, in a more general framework, the link between the $\xi_{l,k}$, the $\tilde{f}_{l,k}$ and the eigenvalues. Likewise, if the source term $f$ is given explicitly, we must compute its projection on our basis in order to obtain a completely explicit solution.
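To make the whole pipeline concrete, here is a minimal numerical sketch of the spectral solver: truncate the expansion, project the source and the initial condition with a midpoint quadrature, and integrate each modal ODE exactly with the source frozen on each step. The truncation, grid size, step size and the choice $f \equiv 0$ (so only the homogenization source remains) are all illustrative assumptions:

```python
import numpy as np

K, L = 8, 8        # mode truncation (assumption)
n = 64             # midpoint quadrature nodes per direction (assumption)
xs = (np.arange(n) + 0.5)/n
X, Y = np.meshgrid(xs, xs, indexing='ij')

def w_kl(k, l):
    # orthonormal eigenfunctions with the constant C computed above
    C = np.sqrt(2.0) if k == 0 else 2.0
    return C*np.sin(l*np.pi*Y)*np.cos(k*np.pi*X)

basis = {(k, l): w_kl(k, l) for k in range(K) for l in range(1, L + 1)}
lam = {(k, l): np.pi**2*(k**2 + l**2) for (k, l) in basis}

def project(g):
    # midpoint-rule L2 projections of g on every basis function
    return {kl: (g*b).mean() for kl, b in basis.items()}

def f_tilde(t):
    # homogenization source; the physical source f is taken to be 0 here
    return 1 - Y*(1 + np.exp(-t)*(X**2/2 - X + 1.5))

xi = project(np.sin(2*np.pi*Y))   # initial condition w(x,y,0) = sin(2*pi*y)

dt, T = 1e-2, 0.5
for step in range(int(T/dt)):
    ft = project(f_tilde(step*dt))
    for kl in xi:
        e = np.exp(-lam[kl]*dt)
        # exact step for xi' + lam*xi = f~, with f~ frozen on the step
        xi[kl] = e*xi[kl] + (1 - e)/lam[kl]*ft[kl]

w_T = sum(xi[kl]*basis[kl] for kl in basis)   # approximation of w(x,y,T)
```

The per-mode update is the discrete counterpart of the variation-of-constants formula; since every $\lambda_{k,l} > 0$ here, the scheme is unconditionally stable.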

### References

[1]: G. Allaire, *Analyse numérique et optimisation*, Chapter 7.
