Finite differences and second quantization

Most recent TBTK release at the time of writing: v1.0.3

Single-particle quantum mechanics is usually formulated using a differential equation such as the Schrödinger equation. The simplest way to solve this numerically is through the use of finite differences. While TBTK is designed for discrete problems formulated in second quantized form, this includes the possibility to work with finite difference schemes. In this post, we therefore go through the basic principles of finite differences and how to arrive at a discretized version of the Schrödinger equation. We then describe how to translate this into second quantized form.

While we focus on the Schrödinger equation, the methods discussed here can be applied to any finite difference scheme and should therefore be of broad interest also beyond quantum mechanics. In particular, the ease with which models are extended beyond one dimension should be of interest for anyone implementing finite difference calculations in general.

Finite differences

First derivative

Finite differences are simple to understand if we remind ourselves of the definition of the derivative of a function,

    \[ \frac{df(x)}{dx} = \lim_{\Delta x\rightarrow 0}\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}. \]

As the name implies, finite differences result when the derivative is instead approximated by

    \[ \frac{df(x)}{dx} \approx \frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}, \]

for some finite discretization length \Delta x.
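
As a quick numerical illustration (a standalone sketch, independent of TBTK), the following C++ snippet approximates the derivative of sin(x) at x = 1 using the centered difference and compares it to the exact value cos(1).

#include <cmath>
#include <iostream>

int main(){
    double x = 1;
    double dx = 1e-3;

    //Centered difference approximation of d(sin(x))/dx.
    double approximation
        = (std::sin(x + dx) - std::sin(x - dx))/(2*dx);

    //Exact derivative for comparison.
    double exact = std::cos(x);

    std::cout << "Approximation: " << approximation << "\n";
    std::cout << "Exact:         " << exact << "\n";

    return 0;
}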

The finite difference scheme naturally defines a uniform grid containing the points x_n = n\Delta x for integer n. Moreover, since the finite difference approximation should be understood to be valid for each of these points, it really is a set of linear equations of the form

    \[ \begin{aligned} ... & ...\\ \frac{df(x_n)}{dx} &\approx \frac{f(x_{n+1}) - f(x_{n-1})}{2\Delta x},\\ \frac{df(x_{n+1})}{dx} &\approx \frac{f(x_{n+2}) - f(x_n)}{2\Delta x},\\ ... & ... \end{aligned} \]

Using matrix notation, this can also be expressed as

    \[\left[\begin{array}{c} ...\\ \frac{df(x_n)}{dx}\\ \frac{df(x_{n+1})}{dx}\\ ... \end{array}\right] \approx D^{(1)}\left[\begin{array}{c} ...\\ f(x_n)\\ f(x_{n+1})\\ ... \end{array}\right], \]

where

    \[ D^{(1)} = \frac{1}{2\Delta x}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & -1 & 0 & 1 & 0 & ...\\ ... & 0 & -1 & 0 & 1 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right].\]

Here -1 and 1 appear right below and above the diagonal, respectively.

For the problem to fit into a computer, the number of discretization points needs to be finite, for example \{x_0, ..., x_{N-1}\} for some finite N. The matrix therefore has to be truncated, which means that the first and last rows in fact correspond to

    \[\begin{aligned} \frac{df(x_{0})}{dx} &\approx \frac{f(x_1) - 0}{2\Delta x},\\ \frac{df(x_{N-1})}{dx} &\approx \frac{0 - f(x_{N-2})}{2\Delta x}. \end{aligned}\]

That is, by simply truncating the matrix we imply the boundary conditions f(x_{-1}) = f(x_{N}) = 0. Other boundary conditions can be added back into the formulation, but for simplicity we will stick with these here.
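
To make the truncation explicit, here is a minimal sketch (plain C++, not TBTK code; the function name is chosen for this illustration only) that assembles the truncated N x N matrix D^{(1)} as a dense array. The boundary conditions f(x_{-1}) = f(x_{N}) = 0 follow automatically since the corresponding columns simply do not exist.

#include <vector>

//Assemble the truncated N x N centered difference matrix D^(1).
std::vector<std::vector<double>> createFirstDerivativeMatrix(
    int N,
    double dx
){
    std::vector<std::vector<double>> D(N, std::vector<double>(N, 0));
    for(int n = 0; n < N; n++){
        //Entry 1/(2 dx) above the diagonal.
        if(n + 1 < N)
            D[n][n + 1] = 1/(2*dx);

        //Entry -1/(2 dx) below the diagonal.
        if(n - 1 >= 0)
            D[n][n - 1] = -1/(2*dx);
    }

    return D;
}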

Forward, centered, and backward differences

To be more specific, the finite difference scheme above is called a centered difference since it is formulated symmetrically around the point at which the derivative is approximated. For functions with a well-defined derivative, we could equivalently have formulated the derivative using any of the three expressions

    \[\begin{aligned} \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x + \Delta x) - f(x)}{\Delta x},\\ \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x},\\ \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x) - f(x - \Delta x)}{\Delta x}. \end{aligned}\]

However, when working with finite differences the three expressions do differ and are known as the forward, centered, and backward difference, respectively.

If we go through the same steps as for the centered difference above also for the forward and backward differences, we arrive at matrices that are multiplied by a factor 1/\Delta x instead of 1/(2\Delta x). Moreover, the two matrices have the -1 and 1 off-diagonal entries moved onto the diagonal, respectively. This leads to a different type of boundary condition once the matrix is truncated. However, here we are not interested in the finite difference expression for the first derivative directly. Rather, the forward and backward differences are of interest to us because they allow us to arrive at a simple expression for the second derivative.
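
Written out explicitly in the same notation as above, and denoting the forward and backward difference matrices by D^{(+)} and D^{(-)} (a notation introduced here for convenience), they take the form

    \[ D^{(+)} = \frac{1}{\Delta x}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & 0 & -1 & 1 & 0 & ...\\ ... & 0 & 0 & -1 & 1 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right], \qquad D^{(-)} = \frac{1}{\Delta x}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & -1 & 1 & 0 & 0 & ...\\ ... & 0 & -1 & 1 & 0 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right]. \]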

Second derivative

The second derivative of a function is given by

    \[ \frac{d^2f(x)}{dx^2} = \lim_{\Delta x \rightarrow 0}\frac{\frac{df(x + \frac{\Delta x}{2})}{dx} - \frac{df(x - \frac{\Delta x}{2})}{dx}}{\Delta x}. \]

When discretized, the first term in this expression can either be understood as the centered difference at x + \Delta x/2, or as the forward difference at x. Similarly the second term can be understood as either the centered difference at x - \Delta x/2, or as the backward difference at x.

It is helpful to consider the first and second terms to be the forward and backward differences, respectively, since this allows us to stay on the lattice x_{n} = n\Delta x. Plugging in the forward and backward differences and removing the limit, we get

    \[ \frac{d^2f(x)}{dx^2} \approx \frac{\frac{f(x + \Delta x) - f(x)}{\Delta x} - \frac{f(x) - f(x - \Delta x)}{\Delta x}}{\Delta x} = \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{\Delta x^2}. \]

Rewriting this in matrix form, we then arrive at

    \[\left[\begin{array}{c} ...\\ \frac{d^2f(x_n)}{dx^2}\\ \frac{d^2f(x_{n+1})}{dx^2}\\ ... \end{array}\right] \approx D^{(2)}\left[\begin{array}{c} ...\\ f(x_n)\\ f(x_{n+1})\\ ... \end{array}\right], \]

where

    \[ D^{(2)} = \frac{1}{\Delta x^2}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & 1 & -2 & 1 & 0 & ...\\ ... & 0 & 1 & -2 & 1 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right].\]

Here the -2 entries fall on the diagonal. Truncating this matrix implies the same boundary conditions f(x_{-1}) = f(x_{N}) = 0 as for the centered difference.

Discretizing the Schrödinger equation

We are now ready to discretize the Hamiltonian of the one-dimensional Schrödinger equation

    \[ H\Psi(x) = \left(\frac{-\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)\right)\Psi(x). \]

When \Psi(x) is defined on a discrete lattice, the potential V(x) becomes a diagonal matrix V with V(x_n) along the diagonal. Using this together with the discretized approximation of the second derivative derived above, the discretized Hamiltonian becomes

    \[ H_{D} = \frac{-\hbar^2}{2m}D^{(2)} + V. \]
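
Before turning to the second quantized formulation, it can be instructive to see H_D assembled as an ordinary dense matrix. The following sketch is plain C++, independent of TBTK, with function and variable names chosen for this illustration only.

#include <vector>

//Assemble H_D = -(hbar^2/2m)D^(2) + V for N = V.size() lattice points.
std::vector<std::vector<double>> createHamiltonian(
    const std::vector<double> &V,
    double hbar,
    double m,
    double dx
){
    int N = V.size();
    double t = hbar*hbar/(2*m*dx*dx);

    std::vector<std::vector<double>> H(N, std::vector<double>(N, 0));
    for(int n = 0; n < N; n++){
        //Diagonal entries: hbar^2/(m dx^2) + V(x_n).
        H[n][n] = 2*t + V[n];

        //First off-diagonal entries: -hbar^2/(2m dx^2).
        if(n + 1 < N){
            H[n][n + 1] = -t;
            H[n + 1][n] = -t;
        }
    }

    return H;
}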

Second quantized formulation

While the equation above is a matrix equation, TBTK is intended for Hamiltonians in second quantized form, which for single-particle physics means

    \[ H_{sq} = \sum_{\mathbf{i}\mathbf{j}}a_{\mathbf{i}\mathbf{j}}c_{\mathbf{i}}^{\dagger}c_{\mathbf{j}}. \]

However, as long as we are concerned with single-particle physics, this is really nothing other than a different notation for a matrix with matrix elements a_{\mathbf{i}\mathbf{j}}. That is, \mathbf{i} and \mathbf{j} can be understood as generalized row and column indices, respectively.
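
As a minimal example of this correspondence, a two-level system with \mathbf{i}, \mathbf{j} \in \{0, 1\} gives

    \[ H_{sq} = \sum_{ij}a_{ij}c_{i}^{\dagger}c_{j} \quad\longleftrightarrow\quad \left[\begin{array}{cc} a_{00} & a_{01}\\ a_{10} & a_{11} \end{array}\right]. \]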

To specify a model corresponding to the finite difference equation above in TBTK, all we need to do now is to think of HoppingAmplitude(value, i, j) as another notation for MatrixElement(value, row, column). We can, therefore, read off the matrix elements of our operators -(\hbar^2/2m)D^{(2)} and V directly and feed these to the model. For the diagonal terms, this means \hbar^2/(m\Delta x^2) + V(x_n), while for the first off-diagonals we find -\hbar^2/(2m\Delta x^2). Keeping the kinetic and potential contributions to the diagonal separate, we can implement the model specification in TBTK as follows.

double t = hbar*hbar/(2*m*dx*dx);

Model model;
for(int x = 0; x < SIZE_X; x++){
    //Add the second derivative.
    model << HoppingAmplitude(2*t, {x}, {x});
    if(x + 1 < SIZE_X){
        model << HoppingAmplitude(
            -t,
            {x + 1},
            {x}
        ) + HC;
    }

    //Add the potential.
    model << HoppingAmplitude(V[x], {x}, {x});
}
model.construct();

Here the variables hbar, m, dx, and SIZE_X, as well as the array V containing the potential, are assumed to already have been defined. Also note that only the HoppingAmplitude corresponding to the lower off-diagonal terms is written out explicitly; the upper off-diagonal terms are included through the “+ HC” appended to it.
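
Once the model is constructed, it can be handed to one of TBTK's solvers. As a sketch of a typical workflow (assuming the Diagonalizer solver and property extractor interface described in the TBTK documentation), the eigenvalues of the discretized Hamiltonian can be obtained as follows.

//Set up and run the solver.
Solver::Diagonalizer solver;
solver.setModel(model);
solver.run();

//Extract the eigenvalues.
PropertyExtractor::Diagonalizer propertyExtractor(solver);
Property::EigenValues eigenValues = propertyExtractor.getEigenValues();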

Extension to multiple dimensions

The construction of a multi-dimensional finite difference equation is similar to the procedure described above. However, the function that is being differentiated now depends on several variables that together define a multi-dimensional discrete lattice after discretization. A traditional matrix formulation requires these lattice points to be labeled by a linear index, so that each lattice point can be associated with a position in the vectorized function, and there are several ways of doing so. The exact form of the derivative operators will, therefore, depend on the convention chosen.
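
For example, with a row-major convention (one possible choice among many, shown here only for illustration), the lattice point (x, y) would be mapped to a linear index as follows, and this bookkeeping would have to be carried through every matrix element by hand.

//One possible (row-major) mapping from the lattice point (x, y)
//to a linear index in a traditional matrix formulation.
int toLinearIndex(int x, int y, int SIZE_Y){
    return x*SIZE_Y + y;
}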

This is a situation in which TBTK shows significant strength since it provides this mapping automatically and allows for the specification of the model using the more natural physical indices. Taking two dimensions as an example, where the physical index is {x, y}, it is simple to extend the one-dimensional case above as follows.

double t_x = hbar*hbar/(2*m*dx*dx);
double t_y = hbar*hbar/(2*m*dy*dy);

Model model;
for(int x = 0; x < SIZE_X; x++){
    for(int y = 0; y < SIZE_Y; y++){
        //Add the second derivative,
        //x-direction.
        model << HoppingAmplitude(
            2*t_x,
            {x, y},
            {x, y}
        );
        if(x + 1 < SIZE_X){
            model << HoppingAmplitude(
                -t_x,
                {x + 1, y},
                {x,     y}
            ) + HC;
        }

        //Add the second derivative,
        //y-direction.
        model << HoppingAmplitude(
            2*t_y,
            {x, y},
            {x, y}
        );
        if(y + 1 < SIZE_Y){
            model << HoppingAmplitude(
                -t_y,
                {x, y + 1},
                {x, y}
            ) + HC;
        }

        //Add the potential.
        model << HoppingAmplitude(V[x][y], {x, y}, {x, y});
    }
}
model.construct();

The code is almost identical to that for the one-dimensional case, and for generality we have here allowed for different step lengths along the x and y directions.

Final words

Finite differences are the most straightforward way to convert a differential equation into a form suitable for numerical calculations. We have only scratched the surface here, for example by focusing exclusively on the boundary conditions f(x_{-1}) = f(x_{N}) = 0 and leaving out higher-order approximations and non-uniform grids. However, we have covered enough material to provide an understanding of the essential nature of finite differences, and have demonstrated how TBTK makes it particularly simple to work with them also in higher dimensions.

Since it is so simple to set up general finite difference schemes using TBTK, it can be of interest beyond the application to quantum mechanics. If you are interested in finite difference schemes more generally and want to know more about how TBTK can be used to simplify such calculations, then see for example how to set up complex geometries and how to extract the matrix formulation of the problem from the model.

In this post, we have only provided short code fragments to demonstrate the principle. For applications of finite differences to concrete quantum mechanical problems, see for example:

Using TBTK to calculate the density of states (DOS) of a 1D, 2D, and 3D square lattice

Direct access to wave function amplitudes and eigenvalues in TBTK

Creating Models with complex geometries using an IndexFilter in TBTK

Dynamically adjustable HoppingAmplitudes using callback functions in TBTK.
