The band structure of graphene

Most recent TBTK release at the time of writing: v1.1.1

In condensed matter physics, the electronic band structure is one of the most commonly used tools for understanding the electronic properties of a material. Here we take a look at how to set up a tight-binding model of graphene and calculate the band structure along paths between certain high symmetry points in the Brillouin zone. We also calculate the density of states (DOS). A basic understanding of band theory is assumed and if further background is required a good starting point is to look at the nearly free electron model, Bloch waves, Bloch’s theorem, and the Brillouin zone.

Physical description

Lattice

Graphene has a hexagonal lattice that can be described using the lattice vectors \mathbf{r}_0 = a(1, 0, 0), \mathbf{r}_1 = a(-1/2, \sqrt{3}/2, 0), and \mathbf{r}_2 = a(0, 0, 1). We set the lattice constant to a = 2.5 \textrm{\AA}, which is only meant to reflect the correct order of magnitude. Strictly speaking, only two two-dimensional vectors are required to describe the lattice. However, we have here made the vectors three-dimensional and included a third lattice vector \mathbf{r}_2 that is perpendicular to the lattice to simplify the calculation of the reciprocal lattice vectors. The resulting lattice is shown in the image below.

Indicated in the image is also the unit cell and the A and B sublattice sites within the unit cell. We further define

    \[ \begin{aligned} \mathbf{r}_{AB}^{(0)} =& \frac{\mathbf{r}_0 + 2\mathbf{r}_1}{3},\\ \mathbf{r}_{AB}^{(1)} =& -\mathbf{r}_1 + \mathbf{r}_{AB}^{(0)},\\ \mathbf{r}_{AB}^{(2)} =& -\mathbf{r}_0 - \mathbf{r}_1 + \mathbf{r}_{AB}^{(0)}, \end{aligned} \]

which can be verified to be the vector from A to its three nearest neighbor B sites.

Brillouin zone

The reciprocal lattice vectors can now be calculated using

    \[ \begin{aligned} \mathbf{k}_0 =& \frac{2\pi \mathbf{r}_1\times\mathbf{r}_2}{\mathbf{r}_0\cdot\mathbf{r}_1\times\mathbf{r}_2},\\ \mathbf{k}_1 =& \frac{2\pi \mathbf{r}_2\times\mathbf{r}_0}{\mathbf{r}_1\cdot\mathbf{r}_2\times\mathbf{r}_0},\\ \mathbf{k}_2 =& \frac{2\pi\mathbf{r}_0\times\mathbf{r}_1}{\mathbf{r}_2\cdot\mathbf{r}_0\times\mathbf{r}_1}. \end{aligned} \]

Here \mathbf{k}_2, just like \mathbf{r}_2, is associated with the z-direction and will be dropped from here on. The two remaining reciprocal lattice vectors \mathbf{k}_0 and \mathbf{k}_1 define the reciprocal lattice of interest to us. The reciprocal lattice is shown below.

In the image above we have also indicated the first Brillouin zone in red, and outlined the path \Gamma \rightarrow M \rightarrow K \rightarrow \Gamma along which the band structure will be calculated. The high symmetry points can be verified to be given by

    \[ \begin{aligned} \mathbf{\Gamma} =& (0, 0, 0),\\ \mathbf{M} =& (\frac{\pi}{a}, \frac{-\pi}{\sqrt{3}a}, 0),\\ \mathbf{K} =& (\frac{4\pi}{3a}, 0, 0). \end{aligned} \]

Hamiltonian

We will use a simple tight-binding model of graphene of the form

    \[ H = -t\sum_{\mathbf{i}\delta}c_{\mathbf{i},A}^{\dagger}c_{\mathbf{i}+\boldsymbol{\delta},B} + H.c. \]

Here t is a hopping amplitude connecting nearest neighbor sites, which we take to be t = 3 eV (again, only correct up to order of magnitude). The index \mathbf{i} runs over the unit cells, while \boldsymbol{\delta} takes on the three values that make \mathbf{i}+\boldsymbol{\delta},B a nearest neighbor of \mathbf{i},A.

Because of translation invariance, it is possible to transform the expression above to a simple expression in reciprocal space. To do so we write

    \[ \begin{aligned} c_{\mathbf{i},A} =& \frac{1}{\sqrt{N}}\sum_{\mathbf{k}}c_{\mathbf{k},A}e^{i\mathbf{k}\cdot \mathbf{R}_{\mathbf{i}}},\\ c_{\mathbf{i}+\boldsymbol{\delta},B} =& \frac{1}{\sqrt{N}}\sum_{\mathbf{k}}c_{\mathbf{k},B}e^{i\mathbf{k}\cdot \left(\mathbf{R}_{\mathbf{i}} + \mathbf{r}_{\boldsymbol{\delta}}\right)}. \end{aligned} \]

Here \mathbf{R}_{\mathbf{i}} is the position of the A atom in unit cell \mathbf{i}, while \mathbf{r}_{\boldsymbol{\delta}} \in \{\mathbf{r}_{AB}^{(0)}, \mathbf{r}_{AB}^{(1)}, \mathbf{r}_{AB}^{(2)}\}.

Inserting this into the expression above we get

    \[ \begin{aligned} H =& \frac{-t}{N}\sum_{\mathbf{i}\mathbf{k}\mathbf{k}'\boldsymbol{\delta}}c_{\mathbf{k},A}^{\dagger}c_{\mathbf{k}',B}e^{-i\mathbf{k}\cdot \mathbf{R}_{\mathbf{i}}}e^{i\mathbf{k}'\cdot\left(\mathbf{R}_{\mathbf{i}} + \mathbf{r}_{\boldsymbol{\delta}}\right)} + H.c.\\ =& -t\sum_{\mathbf{k}\boldsymbol{\delta}}c_{\mathbf{k},A}^{\dagger}c_{\mathbf{k},B}e^{i\mathbf{k}\cdot \mathbf{r}_{\boldsymbol{\delta}}} + H.c.\\ =& -t\sum_{\mathbf{k}}c_{\mathbf{k},A}^{\dagger}c_{\mathbf{k},B}\left(e^{i\mathbf{k}\cdot\mathbf{r}_{AB}^{(0)}} + e^{i\mathbf{k}\cdot\mathbf{r}_{AB}^{(1)}} + e^{i\mathbf{k}\cdot\mathbf{r}_{AB}^{(2)}}\right) + H.c. \end{aligned} \]

In the second line we have summed over \mathbf{i} and used the delta function property of \sum_{\mathbf{i}}e^{i(\mathbf{k}-\mathbf{k}')\cdot\mathbf{R}_{\mathbf{i}}}/N to eliminate \mathbf{k}', while in the last line we have made the sum over \boldsymbol{\delta} explicit.

Implementation

Parameters

We are now ready to implement the calculation using TBTK. To do so, we begin by specifying the parameters to be used in the calculation

//Set the natural units for this calculation.
UnitHandler::setScales({
        "1 C", "1 pcs", "1 eV",
        "1 Ao", "1 K", "1 s"
});

//Define parameters.
double t = 3;   //eV
double a = 2.5; //Ångström
unsigned int BRILLOUIN_ZONE_RESOLUTION = 1000;
vector<unsigned int> numMeshPoints = {
    BRILLOUIN_ZONE_RESOLUTION,
    BRILLOUIN_ZONE_RESOLUTION
};
const int K_POINTS_PER_PATH = 100;
const double ENERGY_LOWER_BOUND = -10;
const double ENERGY_UPPER_BOUND = 10;
const int ENERGY_RESOLUTION = 1000;

In lines 2-5, we specify the units in which dimensionful parameters are to be understood. For our purposes here, this means that energies and distances are measured in electronvolts (eV) and Ångström (Ao), respectively.

We next specify the values for t and a as given in the physical description above. The value ‘BRILLOUIN_ZONE_RESOLUTION’ determines the number of subintervals each of the reciprocal lattice vectors is divided into when generating a mesh for the Brillouin zone, and ‘numMeshPoints’ combines this information into a vector. Similarly, ‘K_POINTS_PER_PATH’ determines the number of k-points to use along each of the paths \Gamma \rightarrow M, M \rightarrow K, and K \rightarrow \Gamma when calculating the band structure. Finally, the energy window that is used to calculate the DOS is specified in the last three lines.

Real and reciprocal lattice

The lattice vectors \mathbf{r}_{0}, \mathbf{r}_{1}, and \mathbf{r}_{2} are now defined as follows.

Vector3d r[3];
r[0] = Vector3d({a,     0,              0});
r[1] = Vector3d({-a/2,  a*sqrt(3)/2,    0});
r[2] = Vector3d({0,     0,              a});

The nearest neighbor vectors \mathbf{r}_{AB}^{(0)}, \mathbf{r}_{AB}^{(1)}, and \mathbf{r}_{AB}^{(2)} are similarly defined using

Vector3d r_AB[3];
r_AB[0] = (r[0] + 2*r[1])/3.;
r_AB[1] = -r[1] + r_AB[0];
r_AB[2] = -r[0] - r[1] + r_AB[0];

Next, the reciprocal lattice vectors are calculated using the expressions at the beginning of the Brillouin zone section.

Vector3d k[3];
for(unsigned int n = 0; n < 3; n++){
    k[n] = 2*M_PI*r[(n+1)%3]*r[(n+2)%3]/(
        Vector3d::dotProduct(
            r[n],
            r[(n+1)%3]*r[(n+2)%3]
        )
    );
}

Note that here ‘*’ indicates the cross product, while the scalar product between u and v is calculated using Vector3d::dotProduct(u, v).

Having defined the lattice vectors, we can now construct a Brillouin zone.

BrillouinZone brillouinZone(
    {
        {k[0].x, k[0].y},
        {k[1].x, k[1].y}
    },
    SpacePartition::MeshType::Nodal
);

Here the first argument is a list of lattice vectors, while the second argument specifies the type of mesh that is going to be associated with this Brillouin zone. SpacePartition::MeshType::Nodal means that when the Brillouin zone is divided into small parallelograms, the mesh points will lie on the corners of these parallelograms. If SpacePartition::MeshType::Interior is used instead, the mesh points will lie in the middle of the parallelograms.

Having constructed a Brillouin zone, we finally generate a mesh by passing the number of segments into which each reciprocal lattice vector is divided to the following function.

vector<vector<double>> mesh
    = brillouinZone.getMinorMesh(
        numMeshPoints
    );

Set up the model

The model is set up using the following code.

Model model;
for(unsigned int n = 0; n < mesh.size(); n++){
    //Get Index representation of the current
    //k-point.
    Index kIndex = brillouinZone.getMinorCellIndex(
        mesh[n],
        numMeshPoints
    );

    //Calculate the matrix element.
    Vector3d k({mesh[n][0], mesh[n][1], 0});
    complex<double> h_01 = -t*(
        exp(
            -i*Vector3d::dotProduct(
                k,
                r_AB[0]
            )
        )
        + exp(
            -i*Vector3d::dotProduct(
                k,
                r_AB[1]
            )
        )
        + exp(
            -i*Vector3d::dotProduct(
                k,
                r_AB[2]
            )
        )
    );

    //Add the matrix element to the model.
    model << HoppingAmplitude(
        h_01,
        {kIndex[0], kIndex[1], 0},
        {kIndex[0], kIndex[1], 1}
    ) + HC;
}
model.construct();

Here the loop runs over each lattice point in the mesh. To understand lines 4-7, we first note that the mesh contains k-points of the form (k_x, k_y) with real numbers k_x and k_y. However, to specify a model, we need discrete indices with integer-valued subindices to label the points. Think of this as each k value being of the form \mathbf{k}_{\mathbf{i}}, where \mathbf{k}_{\mathbf{i}} is a real-valued vector while \mathbf{i} is an integer-valued vector that indexes the different k-points. The BrillouinZone solves this problem for us by automatically providing a mapping from continuous indices to discrete indices. Given a real index in the first Brillouin zone and information about the number of subdivisions of the lattice vectors, brillouinZone.getMinorCellIndex() finds the closest discrete point and returns its Index representation. This index can then be used to refer to the k-point whenever an Index object is required.

In line 10, the mesh point is converted to a Vector3d object that is then used in lines 11-30 to calculate the matrix element we derived in the section about the Hamiltonian. The matrix element, together with its Hermitian conjugate, is then fed to the model in lines 33-37. Here we construct the full index for the model by combining the integer kIndex[0] and kIndex[1] values, together with the sublattice index, into an index structure of the form {kx, ky, sublattice}. A zero in the sublattice index refers to an A site, while one refers to a B site. Finally, the model is constructed in line 39.

Selecting solver

We are now ready to solve the model and will use diagonalization to do so. In this case, we have a problem that is block-diagonal in the k-index. For this reason, we will use the solver Solver::BlockDiagonalizer that can take advantage of this block structure. To set up and run the diagonalization we write

Solver::BlockDiagonalizer solver;
solver.setModel(model);
solver.run();

We then set up the corresponding property extractor and configure the energy window using

PropertyExtractor::BlockDiagonalizer
    propertyExtractor(solver);
propertyExtractor.setEnergyWindow(
    ENERGY_LOWER_BOUND,
    ENERGY_UPPER_BOUND,
    ENERGY_RESOLUTION
);

Extracting the DOS

Extracting and plotting the DOS and saving it to file is now done as follows.

Property::DOS dos = propertyExtractor.calculateDOS();
Plotter plotter;
plotter.setLabelX("Energy");
plotter.setLabelY("DOS");
plotter.plot(dos, 0.03);
plotter.save("figures/DOS.png");

Here the second parameter passed to ‘plotter.plot()’ is a broadening term that is used to apply Gaussian smoothing to the DOS before plotting it.

Extract the band structure

We are now ready to calculate the band structure. To do so, we begin by defining the high symmetry points

Vector3d Gamma({0,        0,                 0});
Vector3d M({M_PI/a,       -M_PI/(sqrt(3)*a), 0});
Vector3d K({4*M_PI/(3*a), 0,                 0});

These are then packed in pairs to define the three paths \Gamma \rightarrow M, M \rightarrow K, and K \rightarrow \Gamma

vector<vector<Vector3d>> paths = {
    {Gamma, M},
    {M, K},
    {K, Gamma}
};

We now loop over each of the three paths and calculate the band structure at each k-point along these paths.

Array<double> bandStructure(
    {2, 3*K_POINTS_PER_PATH},
    0
);
Range interpolator(0, 1, K_POINTS_PER_PATH);
for(unsigned int p = 0; p < 3; p++){
    //Select the start and endpoints for the
    //current path.
    Vector3d startPoint = paths[p][0];
    Vector3d endPoint = paths[p][1];

    //Loop over a single path.
    for(
        unsigned int n = 0;
        n < K_POINTS_PER_PATH;
        n++
    ){
        //Interpolate between the paths start
        //and end point.
        Vector3d k = (
            interpolator[n]*endPoint
            + (1 - interpolator[n])*startPoint
        );

        //Get the Index representation of the
        //current k-point.
        Index kIndex = brillouinZone.getMinorCellIndex(
            {k.x, k.y},
            numMeshPoints
        );

        //Extract the eigenvalues for the
        //current k-point.
        bandStructure[
            {0, n+p*K_POINTS_PER_PATH}
        ] = propertyExtractor.getEigenValue(
            kIndex,
            0
        );
        bandStructure[
            {1, n+p*K_POINTS_PER_PATH}
        ] = propertyExtractor.getEigenValue(
            kIndex,
            1
        );
    }
}

Here we first set up the array ‘bandStructure’ that is to contain the band structure with two eigenvalues per k-point. The variable ‘interpolator’ is then defined, which is an array of values from 0 to 1 that will be used to interpolate between the path's start and end points. In lines 9 and 10 we then select the start and end points for the current path, and the loop over this path begins in line 13.

In lines 20-23, we calculate an interpolated k-point value between the first and last points in the given path. Similarly to when setting up the model, we then request the Index that corresponds to the given k-point. Having obtained this Index, we extract the lowest eigenvalue for the corresponding block in lines 34-39 and store it in the array ‘bandStructure’. Similarly, the highest eigenvalue is obtained for the same block in lines 40-45.

Next, we calculate the maximum and minimum value of the band structure along the calculated paths. We do this to allow for drawing vertical lines of the correct height at the M and K points when plotting the data.

double min = bandStructure[{0, 0}];
double max = bandStructure[{1, 0}];
for(
    unsigned int n = 0;
    n < 3*K_POINTS_PER_PATH;
    n++
){
    if(min > bandStructure[{0, n}])
        min = bandStructure[{0, n}];
    if(max < bandStructure[{1, n}])
        max = bandStructure[{1, n}];
}

Finally, we plot the band structure.

plotter.clear();
plotter.setHold(true);
plotter.setLabelX("k");
plotter.setLabelY("Energy");
plotter.plot(bandStructure.getSlice({0, _a_}));
plotter.plot(bandStructure.getSlice({1, _a_}));
plotter.plot(
    {K_POINTS_PER_PATH, K_POINTS_PER_PATH},
    {min, max}
);
plotter.plot(
    {2*K_POINTS_PER_PATH, 2*K_POINTS_PER_PATH},
    {min, max}
);
plotter.save("figures/BandStructure.png");

Here the ‘_a_’ flag in the call to bandStructure.getSlice({0, _a_}) on line 5 (and 6) is a wildcard. It indicates that we want a new array containing all the elements of the original array that have a zero in the first index and an arbitrary value in the second. ‘a’ stands for ‘all’.

Results

Below the results are shown. The DOS is characteristically v-shaped around E=0 and is peaked 1/3 of the way to the band edge. In the band structure, we can see the Dirac cone at the K-point (the second vertical line). The low density of states at E=0 can be understood as a consequence of the small number of k-points around the K-point for which the energy is close to zero. The cone has a finite slope, and therefore only a single k-point is at E=0 (one more point is at the K'-point, which we have not considered here but which is at another corner of the Brillouin zone). Compare this with the flat region at the M-point (first vertical line), which results in a significant number of k-points with roughly the same energy. This flat region is responsible for the strong peaks in the DOS.

DOS

Band structure

Full code

The full code is available in src/main.cpp in the project 2019_07_05 of the Second Tech code package. See the README for instructions on how to build and run.

|0> to |1>: Why quantum technology is bound to rise.

You may have noted an increasing activity around quantum technologies in recent years. Giants such as IBM, Microsoft, Google, and Intel are jumping into quantum computing alongside the by now well established D-Wave and more recent startups such as Rigetti, Q-Ctrl, Zapata, etc. The European Union recently launched its €1 billion Quantum Technology Flagship, the US House of Representatives passed a bill to invest more than $1.2 billion in Quantum Information Science, and China is already investing heavily in the sector.

So why does this happen right now? Is fear of missing out driving a hype, or has some fundamental breakthrough suddenly made quantum technologies viable? To understand what is happening, take a look at the figure below.

At the end of Moore’s law.

The difficulties

A lot has been made of the difficulties associated with the end of Moore’s law. For more than half a century, technological progress has been driven by ever smaller and faster computer components. In 1971 the typical precision with which transistors were manufactured was 10 µm. For current technology, the number is 7 nm, and the target 2-3 years from now is 5 nm. This development is indicated through the “reliable manufacturing” line in the figure above. Further miniaturization is difficult as the fundamental limits set by the atoms and their lattice structure are reached at around 0.5 nm.

While it is easy to understand that the atomic size is a fundamental limit, things become tricky already before this. The circuit elements stop working as expected when quantum mechanical effects become important. For example, the confinement energy required to trap an electron within a certain volume element becomes comparable to the typical thermal excitation at room temperature below 10 nm. This effect can only be understood with the help of quantum mechanics.

The fundamental limits that are involved here cause great difficulties for the semiconductor industry as we know it, as future performance improvement has to come from other sources than continued miniaturization.

The emergence of quantum engineering

The line of reliable manufacturing can roughly speaking be thought of as the dividing line between engineering and science. The success and failure rate ratios in the figure are certainly tongue in cheek and not intended to imply that scientists are incompetent or that engineering is easy. Rather, it indicates the difference in time scales with which things can be built and tested. Whatever can be reliably manufactured can be tinkered with and improved upon within a limited time frame, while science is a slower process that moves knowledge forward a small step at a time.

Although semi-classical models that take some quantum mechanical effects into account have played an important role in industry for many decades, detailed quantum mechanical models have largely been a subject reserved for scientific study. Especially when it comes to quantum mechanical models that do not rely on some average over a large number of quantum mechanical particles. But with the line of reliable manufacturing crossing into quantum territory, quantum mechanics is bound to become an engineering discipline, whether this means full-fledged applications in quantum computing, or simply squeezing the last bit of juice out of transistor miniaturization.

The opportunities

The field of computational quantum mechanics has, as a consequence of the improved execution speed brought on by Moore’s law as well as improved algorithms, continuously increased the number of atoms that can be faithfully modeled quantum mechanically. As an example, two data points are provided in the figure, quoting the scientific literature on the typical sizes accessible for relatively detailed quantum mechanical modeling (DFT). The length scales in the figure indicate the typical side length of a cube containing the given number of atoms.

With computational quantum mechanics and reliable manufacturing lines crossing each other, both manufacturing and simulation can greatly benefit from each other. Component production can benefit from the ability to simulate models that are faithful to the end product. More interestingly, however, the ability to build exactly the system that is being simulated can provide hugely useful feedback to the computational quantum mechanics community. While electrical engineers may scratch their heads over how to keep increasing transistor count, scientists should be celebrating the dawn of a new experimental era.

So far quantum mechanical simulations and experiments have only been proxies to each other. What has been possible to simulate has been challenging to build, and vice versa. Scientific results have therefore had to be filtered through a fair amount of good judgment when being compared to each other, and perfect agreement is seldom expected. Which part of the mismatch is due to imperfections in the model, and which part is due to deficiencies in the actual solution method, has therefore been complicated to disentangle. With the possibility to reliably produce more or less the exact systems that are being simulated, a better feedback loop can be established that can help guide the development of better numerical methods.

The development of better methods can, in turn, feed back on the semiconductor industry to, for example, develop better transistors or metrology equipment. It can also provide additional benefits in other areas such as quantum chemistry, which relies on similar methods but applied to molecules etc.

While quantum computing is the poster boy of quantum technologies, it is also the more uncertain direction since major unsolved questions remain. But the fact that companies are moving in this direction is not just hype. It is a natural and exciting consequence of the end of Moore’s law that may have us welcome it with open arms rather than dread it. Independently of whether quantum computing proves successful or not, it seems inevitable that quantum technology and quantum engineering are about to rise.

Finite differences and second quantization

Most recent TBTK release at the time of writing: v1.0.3

Single-particle quantum mechanics is usually formulated using a differential equation such as the Schrödinger equation. The simplest way to solve this numerically is through the use of finite differences. While TBTK is designed for discrete problems formulated in second-quantized form, this includes the possibility to work with finite difference schemes. In this post, we, therefore, go through the basic principles of finite differences and how to arrive at a discretized version of the Schrödinger equation. We then describe how to translate this to a second-quantized form.

While we focus on the Schrödinger equation, the methods discussed here can be applied to any finite difference scheme and should therefore be of broad interest also beyond quantum mechanics. In particular, the ease with which models are extended beyond one dimension should be of interest for anyone implementing finite difference calculations in general.

Finite differences

First derivative

Finite differences are simple to understand if we remind ourselves of the definition of the derivative of a function,

    \[ \frac{df(x)}{dx} = \lim_{\Delta x\rightarrow 0}\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}. \]

As the name implies, finite differences result when the derivative is instead approximated by

    \[ \frac{df(x)}{dx} \approx \frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}, \]

for some finite discretization length \Delta x.

The finite difference scheme naturally defines a uniform grid containing the points x_n = n\Delta x for integer n. Moreover, since the finite difference approximation should be understood to be valid for each of these points, it really is a set of linear equations of the form

    \[ \begin{aligned} ... & ...\\ \frac{df(x_n)}{dx} &\approx \frac{f(x_{n+1}) - f(x_{n-1})}{2\Delta x},\\ \frac{df(x_{n+1})}{dx} &\approx \frac{f(x_{n+2}) - f(x_n)}{2\Delta x},\\ ... & ... \end{aligned} \]

Using matrix notation, this can also be expressed as

    \[\left[\begin{array}{c} ...\\ \frac{df(x_n)}{dx}\\ \frac{df(x_{n+1})}{dx}\\ ... \end{array}\right] \approx D^{(1)}\left[\begin{array}{c} ...\\ f(x_n)\\ f(x_{n+1})\\ ... \end{array}\right], \]

where

    \[ D^{(1)} = \frac{1}{2\Delta x}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & -1 & 0 & 1 & 0 & ...\\ ... & 0 & -1 & 0 & 1 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right].\]

Here -1 and 1 appear right below and above the diagonal, respectively.

For the problem to fit into a computer, the number of discretization points needs to be finite, for example \{x_0, ..., x_{N-1}\} for some finite N. The matrix therefore necessarily has to be truncated, which means that the first and last rows in fact correspond to

    \[\begin{aligned} \frac{df(x_{0})}{dx} &\approx \frac{f(x_1) - 0}{2\Delta x},\\ \frac{df(x_{N-1})}{dx} &\approx \frac{0 - f(x_{N-2})}{2\Delta x}. \end{aligned}\]

That is, by simply truncating the matrix we imply the boundary conditions f(x_{-1}) = f(x_{N}) = 0. Other boundary conditions can be added back into the formulation, but for simplicity we will consider these boundary conditions here.

Forward, centered, and backward differences

To be more specific, the finite difference scheme above is called a centered difference since it is formulated symmetrically around the point at which the derivative is approximated. For functions with a well-defined derivative we could equivalently have formulated the derivative using any of the three expressions

    \[\begin{aligned} \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x + \Delta x) - f(x)}{\Delta x},\\ \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x},\\ \frac{df(x)}{dx} &= \lim_{\Delta x \rightarrow 0}\frac{f(x) - f(x - \Delta x)}{\Delta x}. \end{aligned}\]

However, when working with finite differences the three expressions do differ and are known as the forward, centered, and backward difference, respectively.

If we go through the same steps as for the centered difference above also for the forward and backward differences, we arrive at matrices that are multiplied by a factor 1/\Delta x instead of 1/(2\Delta x). Moreover, the two matrices also have the -1 and 1 off-diagonal entries moved onto the diagonal, respectively. This leads to a different type of boundary conditions once the matrix is truncated. However, here we will not be interested in the finite difference expression for the first derivative directly. Rather, the forward and backward differences are of interest to us since they allow us to arrive at a simple expression for the second derivative.

Second derivative

The second derivative of a function is given by

    \[ \frac{d^2f(x)}{dx^2} = \lim_{\Delta x \rightarrow 0}\frac{\frac{df(x + \frac{\Delta x}{2})}{dx} - \frac{df(x - \frac{\Delta x}{2})}{dx}}{\Delta x}. \]

When discretized, the first term in this expression can either be understood as the centered difference at x + \Delta x/2, or as the forward difference at x. Similarly the second term can be understood as either the centered difference at x - \Delta x/2, or as the backward difference at x.

It is helpful to consider the first and second term to be the forward and backward differences, respectively, since this allows us to stay on the lattice x_{n} = n\Delta x. Plugging in the forward and backward differences and removing the limit, we get

    \[ \frac{d^2f(x)}{dx^2} \approx \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{\Delta x^2}. \]

Rewriting this in matrix form we then arrive at

    \[\left[\begin{array}{c} ...\\ \frac{d^2f(x_n)}{dx^2}\\ \frac{d^2f(x_{n+1})}{dx^2}\\ ... \end{array}\right] \approx D^{(2)}\left[\begin{array}{c} ...\\ f(x_n)\\ f(x_{n+1})\\ ... \end{array}\right], \]

where

    \[ D^{(2)} = \frac{1}{\Delta x^2}\left[\begin{array}{cccccc} ... & ... & ... & ... & ... & ...\\ ... & 1 & -2 & 1 & 0 & ...\\ ... & 0 & 1 & -2 & 1 & ...\\ ... & ... & ... & ... & ... & ... \end{array}\right].\]

Here the -2 entries fall on the diagonal. Truncating this matrix implies the same boundary conditions f(x_{-1}) = f(x_{N}) = 0 as for the centered difference.

Discretizing the Schrödinger equation

We are now ready to discretize the Hamiltonian of the one dimensional Schrödinger equation

    \[ H\Psi(x) = \left(\frac{-\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)\right)\Psi(x). \]

Note that when \Psi(x) is defined on a discrete lattice, the potential V(x) becomes a diagonal matrix V with V(x_n) along the diagonal. Using this together with the discretized approximation of the second derivative derived above, the discretized Hamiltonian becomes

    \[ H_{D} = \frac{-\hbar^2}{2m}D^{(2)} + V. \]

Second quantized formulation

While the equation above is a matrix equation, TBTK is intended for Hamiltonians on second quantization form, which for single particle physics means

    \[ H_{sq} = \sum_{\mathbf{i}\mathbf{j}}a_{\mathbf{i}\mathbf{j}}c_{\mathbf{i}}^{\dagger}c_{\mathbf{j}}. \]

However, as long as we are concerned with single particle physics, this is really nothing else than a different notation for a matrix with matrix elements a_{\mathbf{i}\mathbf{j}}. That is, \mathbf{i} and \mathbf{j} can be understood as generalized row and column indices, respectively.

To specify a model corresponding to the finite difference equation above in TBTK, all we need to do now is to think of HoppingAmplitude(value, i, j) as another notation for MatrixElement(value, row, column). We can, therefore, read off the matrix elements of our operators -(\hbar^2/2m)D^{(2)} and V directly and feed these to our model. For the diagonal terms, this means \hbar^2/m\Delta x^2 + V(x_n), while for the first off-diagonals we find -\hbar^2/2m\Delta x^2. Separating the two diagonal terms, we can implement the model specification in TBTK as follows.

double t = hbar*hbar/(2*m*dx*dx);

Model model;
for(int x = 0; x < SIZE_X; x++){
    //Add the second derivative.
    model << HoppingAmplitude(2*t, {x}, {x});
    if(x + 1 < SIZE_X){
        model << HoppingAmplitude(
            -t,
            {x + 1},
            {x}
        ) + HC;
    }

    //Add the potential.
    model << HoppingAmplitude(V[x], {x}, {x});
}
model.construct();

Here the variables hbar, m, dx, and SIZE_X, as well as the array V containing the potential, are assumed to already have been defined. Also note that only the HoppingAmplitude corresponding to the lower off-diagonal terms is written out explicitly, while the upper off-diagonal terms are included through the “+ HC” at the end of line 12.

Extension to multiple dimensions

The construction of a multi-dimensional finite difference equation is similar to the procedure described above. However, the function that is being differentiated now depends on several variables that together define a multi-dimensional discrete lattice after discretization. A traditional matrix formulation requires these lattice points to be labeled by a linear index to know which lattice point corresponds to which position in the vectorization of the function, and there are several ways of doing so. The exact form of the derivative operators will, therefore, depend on the convention chosen.

This is a situation in which TBTK shows significant strength since it provides this mapping automatically and allows for the specification of the model using the more natural physical indices. Taking two dimensions as an example, where the physical index is {x, y}, it is simple to extend the one-dimensional case above as follows.

double t_x = hbar*hbar/(2*m*dx*dx);
double t_y = hbar*hbar/(2*m*dy*dy);

Model model;
for(int x = 0; x < SIZE_X; x++){
    for(int y = 0; y < SIZE_Y; y++){
        //Add the second derivative,
        //x-direction.
        model << HoppingAmplitude(
            2*t_x,
            {x, y},
            {x, y}
        );
        if(x + 1 < SIZE_X){
            model << HoppingAmplitude(
                -t_x,
                {x + 1, y},
                {x,     y}
            ) + HC;
        }

        //Add the second derivative,
        //y-direction.
        model << HoppingAmplitude(
            2*t_y,
            {x, y},
            {x, y}
        );
        if(y + 1 < SIZE_Y){
            model << HoppingAmplitude(
                -t_y,
                {x, y + 1},
                {x, y}
            ) + HC;
        }

        //Add the potential.
        model << V[x][y];
    }
}
model.construct();

The code is almost identical to the code for the one-dimensional case, and for clarity, we have here allowed for different step lengths along the x and y directions.
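As an aside, the convention dependence mentioned earlier can be made explicit. A common choice is the row-major labeling h = SIZE_Y*x + y; TBTK handles this mapping internally, and the sketch below is a plain C++ illustration of the convention, not TBTK code. With it, the {x + 1, y} neighbor sits SIZE_Y steps off the diagonal in the linear basis, while the {x, y + 1} neighbor sits one step off.

```cpp
#include <cassert>

//Row-major linearization of a two-dimensional lattice point and its
//inverse. With this convention the {x + 1, y} neighbor sits sizeY steps
//away in the linear basis, while {x, y + 1} sits one step away.
int linearIndex(int x, int y, int sizeY){
	return sizeY*x + y;
}

void physicalIndex(int h, int sizeY, int &x, int &y){
	x = h/sizeY;
	y = h%sizeY;
}
```

A column-major convention (h = SIZE_X*y + x) would swap the roles of the two neighbor offsets, which is exactly why the explicit matrix form of the derivative operators is convention dependent.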

Final words

Finite differences are the most straightforward way to convert a differential equation into a form suitable for numerical calculations. We have only scratched the surface here, for example by focusing solely on the boundary conditions f(x_{-1}) = f(x_{N}) = 0, and we have left out higher-order approximations and non-uniform grids. However, we have covered enough material to provide a full understanding of the essential nature of finite differences, and demonstrated how TBTK makes it particularly simple to work with them also in higher dimensions.

Since it is so simple to set up finite difference schemes in TBTK, the library can be of interest beyond its application to quantum mechanics. If you have an interest in finite difference schemes more generally and want to know more about how TBTK can be used to simplify such calculations, then see for example how to set up complex geometries and how to extract the matrix formulation of the problem from the model.

In this post, we have only provided two code fragments to demonstrate the principle. For examples of the application of finite differences to solve concrete quantum mechanical problems, see for example:

Using TBTK to calculate the density of states (DOS) of a 1D, 2D, and 3D square lattice

Direct access to wave function amplitudes and eigenvalues in TBTK

Creating Models with complex geometries using an IndexFilter in TBTK

Dynamically adjustable HoppingAmplitudes using callback functions in TBTK

Dynamically adjustable HoppingAmplitudes using callback functions in TBTK

Most recent TBTK release at the time of writing: v1.0.3

In this blog post we will take a look at how to specify models without necessarily knowing all the model parameters at the point of specification. This can be very useful when the Hamiltonian is time-dependent or depends on some parameter that is to be solved for self-consistently.

We will here consider a third situation in which adjustable HoppingAmplitudes can be utilized by setting up a model with a variable potential. We will use this to repeatedly calculate the probability density and eigenvalues for the lowest energy states for seven different types of one-dimensional potentials. Namely, the infinite square well, square well, double square well, harmonic oscillator, double well, and step potential, as well as a potential with two regions with different potential separated by a barrier.

Parameters

The parameters that are needed are the number of sites to use for the discretized Schrödinger equation, the number of states for which we are going to calculate the probability density, and the nearest neighbor hopping amplitude t. We define these as follows.

//Number of sites and states to extract the
//probability density for.
const int SIZE_X = 500;
const int NUM_STATES = 7;
double t = 1;

We further create an enum that will be used to specify the type of the potential.

//Enum type for specifying the type of the
//potential.
enum PotentialType{
	InfiniteSquareWell,
	SquareWell,
	DoubleSquareWell,
	HarmonicOscillator,
	DoubleWell,
	Step,
	Barrier
};

//The type of the potential.
PotentialType potentialType;

Callback function

Normally a HoppingAmplitude is specified as

HoppingAmplitude(value, toIndex, fromIndex);

However, it is also possible to pass a function instead of a value as first argument. This function will later be called by the HoppingAmplitude to determine the value, passing the toIndex and fromIndex as arguments. That is, the HoppingAmplitude will “call back” later to determine the value.

For a function to qualify as a valid callback, it needs to take two indices as arguments and return a complex<double>. In addition, it needs to be able to handle every pair of indices that it will be called with, that is, every pair of indices that it is packed together with into a HoppingAmplitude when the model is specified.

In our case we deal with a one-dimensional system, and since the callback is used for the potential term, it will only be packed together with indices that are equal to each other. When implementing the callback for the potential, we therefore only need to consider one of the two indices and can extract the position from its first (and only) subindex.
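The call-back mechanism itself can be illustrated without TBTK. The sketch below uses a simplified signature with plain integer indices; LazyAmplitude is a hypothetical stand-in for this illustration, not TBTK's actual HoppingAmplitude class. The stored function is evaluated only when the value is requested, so it can pick up state that has changed since the model was specified.

```cpp
#include <cassert>
#include <complex>
#include <functional>

//A simplified stand-in for a HoppingAmplitude that stores a callback
//instead of a fixed value. The value is computed only when requested,
//so it can depend on state that changes between solver runs.
struct LazyAmplitude{
	std::function<std::complex<double>(int, int)> callback;
	int to, from;

	std::complex<double> getAmplitude() const{
		//"Call back" to determine the value.
		return callback(to, from);
	}
};
```

Because the value is recomputed on every request, rerunning a solver after changing the underlying state automatically picks up the new values, which is the essence of how the potential is switched between runs below.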

//Callback that dynamically returns the
//current potential on a given site.
complex<double> potential(
	const Index &toIndex,
	const Index &fromIndex
){
	int x = fromIndex[0];

	switch(potentialType){
	case InfiniteSquareWell:
		return infiniteSquareWell(x);
	case SquareWell:
		return squareWell(x);
	case DoubleSquareWell:
		return doubleSquareWell(x);
	case HarmonicOscillator:
		return harmonicOscillator(x);
	case DoubleWell:
		return doubleWell(x);
	case Step:
		return step(x);
	case Barrier:
		return barrier(x);
	default:
		Streams::out << "Error. This should never happen.";
		exit(1);
	}
}

After having extracted the position from the fromIndex, this function calls one of seven potential functions depending on the current potentialType to determine the value of the potential at the given point. The implementation of the potential functions is not explicitly shown here, but can be found in the complete example linked from the bottom of this post.
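For illustration, two of these potential functions could be implemented as below. The shapes follow from the names, but the widths and strengths are made-up values for this sketch and are not necessarily those used in the linked example.

```cpp
#include <cmath>

const int SIZE_X = 500;

//Hypothetical potential functions. The shapes match the names, but the
//numerical parameters are illustrative only.
double harmonicOscillator(int x){
	double center = SIZE_X/2.0;
	return 1e-4*std::pow(x - center, 2);
}

double step(int x){
	return (x < SIZE_X/2) ? 0.0 : 1.0;
}
```

Any function of the site index works here, as long as it is defined for every site the callback can be called with.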

Model

The model that we will consider is the discretized one-dimensional Schrödinger equation which we specify as follows.1

//Create the Model.
Model model;
for(unsigned int x = 0; x < SIZE_X; x++){
	//Kinetic terms.
	model << HoppingAmplitude(
		2*t,
		{x},
		{x}
	);
	if(x + 1 < SIZE_X){
		model << HoppingAmplitude(
			-t,
			{x + 1},
			{x}
		) + HC;
	}

	//Potential term.
	model << HoppingAmplitude(
		potential,
		{x},
		{x}
	);
}
model.construct();

The kinetic terms result from discretizing the Laplace operator and letting t = \hbar^2/2mdx^2, while the potential term has been added by passing the callback function potential() as the first parameter to the HoppingAmplitude.

Set up the Solver and PropertyExtractor

We next set up the Solver and PropertyExtractor as follows.

//Setup the Solver and PropertyExtractor.
Solver::Diagonalizer solver;
solver.setModel(model);
PropertyExtractor::Diagonalizer
	propertyExtractor(solver);

The main loop

In the main loop, we will carry out the same calculation for each potential. We therefore begin by creating a list of potential types to perform the calculations for, and a list of filenames to which to save the results.

//List of potentials to run the calculation
//for.
vector<PotentialType> potentialTypes = {
	InfiniteSquareWell,
	SquareWell,
	DoubleSquareWell,
	HarmonicOscillator,
	DoubleWell,
	Step,
	Barrier
};

//List of filenames to save the results to.
vector<string> filenames = {
	"figures/InfiniteSquareWell.png",
	"figures/SquareWell.png",
	"figures/DoubleSquareWell.png",
	"figures/HarmonicOscillator.png",
	"figures/DoubleWell.png",
	"figures/Step.png",
	"figures/Barrier.png"
};

The main loop itself is as follows.

//Run the calculation and plot the result for
//each potential.
for(
	unsigned int n = 0;
	n < potentialTypes.size();
	n++
){
	//Set the current potential.
	potentialType = potentialTypes[n];

	//Run the solver.
	solver.run();

	//Calculate the probability density
	//for the first NUM_STATES.
	Array<double> probabilityDensities(
		{NUM_STATES, SIZE_X}
	);
	for(
		unsigned int state = 0;
		state < NUM_STATES;
		state++
	){
		for(
			unsigned int x = 0;
			x < (unsigned int)SIZE_X;
			x++
		){
			complex<double> amplitude
				= propertyExtractor.getAmplitude(
					state,
					{x}
				);
			probabilityDensities[{state, x}]
				= pow(abs(amplitude), 2);
		}
	}

	//Calculate the eigenvalues.
	Property::EigenValues eigenValues
		= propertyExtractor.getEigenValues();

	//Plot the results.
	plot(
		probabilityDensities,
		eigenValues,
		filenames[n]
	);
}

Here we loop over each type of potential and set the global variable potentialType to the current potential at the start of each iteration. The solver is then executed, which causes the HoppingAmplitudes to be requested internally; this triggers the callback function, which returns the values for the current potential. Next, the probability density is calculated2, followed by the calculation of the eigenvalues.

Finally, the results are plotted and saved to file. The plotting routine involves significant post-processing of the data, and the full code for achieving this can be found in the code linked below. In particular, the probability densities are first rescaled such that NUM_STATES probability densities can be stacked evenly on top of each other without overlapping or extending beyond the plotting window. Each probability density is then shifted by the eigenvalue of the corresponding state to simultaneously convey information about the probability density and the eigenvalues in relation to the potential.
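The rescale-and-offset step can be sketched as follows in plain C++. The band height is a made-up plotting parameter, and the actual routine in the linked code differs in detail.

```cpp
#include <algorithm>
#include <vector>

//Rescale each probability density so that it fits within a band of the
//given height, then offset it by the corresponding eigenvalue so that
//it is drawn at the energy of the state.
std::vector<std::vector<double>> stackDensities(
	const std::vector<std::vector<double>> &densities,
	const std::vector<double> &eigenValues,
	double bandHeight
){
	std::vector<std::vector<double>> result = densities;
	for(unsigned int n = 0; n < densities.size(); n++){
		double maxValue = *std::max_element(
			densities[n].begin(),
			densities[n].end()
		);
		for(double &value : result[n])
			value = bandHeight*value/maxValue + eigenValues[n];
	}
	return result;
}
```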

Results

Below the results are presented. The thick red line indicates the potential, while the black lines are the probability densities. Each probability density has been offset by the corresponding state's eigenvalue to put its energy in relation to the potential. The eigenvalues are also plotted with thin red lines. In agreement with the basic principles of quantum mechanics, we see how the states oscillate in the regions where the potential is smaller than the energy of the state, while they decay exponentially into the regions where the potential is larger.

Infinite square well

Square well

Double square well

Harmonic oscillator

Double well

Step

Barrier

Full code

Full code is available in src/main.cpp in the project 2018_11_07 of the Second Tech code package. See the README for instructions on how to build and run.

Retrieving the Hamiltonian from the Model in TBTK

Most recent TBTK release at the time of writing: v1.0.3

One of the most powerful aspects of TBTK is that it allows for models to be specified and properties to be extracted using physical indices such as {x, y, z, spin}. While this greatly simplifies the modeling process, the other side of the coin is the equally powerful means that are provided for method developers to write efficient general purpose code. In this post we will take a look at how a method developer typically would go about extracting the Hamiltonian from a Model and convert it to a format that is most efficient for the algorithm at hand.

Model

Before moving on to the details of how to extract the Hamiltonian from the model, we here provide the code that sets up the example model that will be used in this tutorial.

//Parameters.
const int SIZE_X = 4;
const int SIZE_Y = 3;
double t = 1;

//Create the Model.
Model model;
for(unsigned int x = 0; x < SIZE_X; x++){
	for(unsigned int y = 0; y < SIZE_Y; y++){
		model << HoppingAmplitude(
			4*t,
			{x, y},
			{x, y}
		);
		if(x + 1 < SIZE_X){
			model << HoppingAmplitude(
				-t,
				{x + 1,	y},
				{x,	y}
			) + HC;
		}
		if(y + 1 < SIZE_Y){
			model << HoppingAmplitude(
				-t,
				{x, y + 1},
				{x, y}
			) + HC;
		}
	}
}
model.construct();

For more details on the model specification itself, see for example Using TBTK to calculate the density of states (DOS) of a 1D, 2D, and 3D square lattice and Direct access to wave function amplitudes and eigenvalues in TBTK.

Comments about HoppingAmplitudes

HoppingAmplitudes can generally be considered as matrix elements \langle \Psi_{n}|H|\Psi_{m}\rangle. While it is very useful from an algorithmic standpoint to consider |\Psi_{m}\rangle to be a vector with a linear index m, it is more natural from an intuitive point of view to work with states using a notation such as |\Psi_{s}(x, y)\rangle. In TBTK notation, where both coordinates and indices necessarily are discrete, this can be encoded as {x, y, s}.

What the model.construct() call does is to set up a one-to-one mapping between the physical indices and corresponding linear indices. That is, if for example {0, 0} is assigned the linear index 0, while {0, 1} is assigned the linear index 1, we can think of the following three expressions as equivalent1
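The idea of such a mapping can be sketched with a container that assigns linear indices in order of first appearance. This is an illustration of the concept only; TBTK's actual mapping is built by model.construct() and is not guaranteed to follow insertion order.

```cpp
#include <map>
#include <vector>

//Assign linear indices to physical indices in order of first
//appearance. An illustration of the idea only; TBTK builds its mapping
//internally with a dedicated data structure.
class IndexMap{
public:
	int getBasisIndex(const std::vector<int> &index){
		auto it = mapping.find(index);
		if(it == mapping.end()){
			it = mapping.insert(
				{index, (int)mapping.size()}
			).first;
		}
		return it->second;
	}
private:
	std::map<std::vector<int>, int> mapping;
};
```

Once every physical index has been seen, the map provides exactly the translation between the first and third expressions above.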

    \[ \textrm{HoppingAmplitude(-t, \{0, 0\}, \{0, 1\})}, \]

    \[ \langle\Psi_{\{0, 0\}}|H|\Psi_{\{0, 1\}}\rangle = -t, \]

    \[ \langle\Psi_{0}|H|\Psi_{1}\rangle = -t. \]

HoppingAmplitudeSet

Inside the Model, the HoppingAmplitudes are stored in a structure called a HoppingAmplitudeSet, which can be considered a sparse matrix format tailored for quantum mechanics. It allows application developers to label the rows and columns of the matrix with physical indices, while simultaneously providing method developers with linear indices that run from 0 to N-1, where N is the basis size.

We can get the HoppingAmplitudeSet from the Model and extract the basis size as follows.

//Get the HoppingAmplitudeSet from the Model
//and extract the basis size.
const HoppingAmplitudeSet &hoppingAmplitudeSet
	= model.getHoppingAmplitudeSet();
unsigned int basisSize
	= hoppingAmplitudeSet.getBasisSize();

Set up the Hamiltonian

While the HoppingAmplitudeSet can be considered a sparse storage format for the HoppingAmplitudes, it is often not the most suitable format to use in a computationally demanding algorithm. Solution algorithms are usually where most of the execution time is spent, and their execution time usually scales superlinearly with the system size. It is therefore usually best to first convert the Hamiltonian to the format most suitable for the algorithm at hand. One of the core strengths of TBTK is that this conversion can be done very efficiently, in a way that scales linearly with the system size.
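The reason the conversion can scale linearly is that each stored element only needs to be visited once. The sketch below illustrates this with a plain triplet list, a stand-in for the HoppingAmplitudeSet with indices assumed to be already linearized: the conversion to a dense matrix is a single pass over the elements.

```cpp
#include <complex>
#include <vector>

//A matrix element in "triplet" form, loosely analogous to a
//HoppingAmplitude with already-linearized indices.
struct Triplet{
	std::complex<double> amplitude;
	unsigned int row, column;
};

//Convert a triplet list to a dense matrix in a single pass over the
//stored elements, i.e. in time linear in the number of elements.
std::vector<std::vector<std::complex<double>>> toDense(
	const std::vector<Triplet> &triplets,
	unsigned int basisSize
){
	std::vector<std::vector<std::complex<double>>> matrix(
		basisSize,
		std::vector<std::complex<double>>(basisSize, 0.)
	);
	for(const Triplet &t : triplets)
		matrix[t.row][t.column] += t.amplitude;
	return matrix;
}
```

Accumulating with += mirrors the loop over the HoppingAmplitudeSet below, where multiple HoppingAmplitudes may contribute to the same matrix element.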

In this example we will simply print the Hamiltonian once we have set it up. For simplicity, we therefore create a dense array in which to store the Hamiltonian.

//Initialize the Hamiltonian in a format most
//suitable for the algorithm at hand.
Array<complex<double>> hamiltonian(
	{basisSize, basisSize},
	0.
);

Next we iterate over all of the HoppingAmplitudes in the HoppingAmplitudeSet as follows.

//Iterate over the HoppingAmplitudes.
for(
	HoppingAmplitudeSet::ConstIterator iterator
		= hoppingAmplitudeSet.cbegin();
	iterator != hoppingAmplitudeSet.cend();
	++iterator
){
	//Extract the amplitude and physical
	//indices from the HoppingAmplitude.
	complex<double> amplitude
		= (*iterator).getAmplitude();
	const Index &toIndex
		= (*iterator).getToIndex();
	const Index &fromIndex
		= (*iterator).getFromIndex();

	//Convert the physical indices to linear
	//indices.
	unsigned int row
		= hoppingAmplitudeSet.getBasisIndex(
			toIndex
		);
	unsigned int column
		= hoppingAmplitudeSet.getBasisIndex(
			fromIndex
		);

	//Write the amplitude to the Hamiltonian
	//that will be used in this algorithm.
	hamiltonian[{row, column}] += amplitude;
}

We first extract the amplitude and the physical indices from the HoppingAmplitude, and then convert the physical indices to linear indices using hoppingAmplitudeSet.getBasisIndex(). Once this is done, we have the amplitude of the matrix element, as well as the linear row and column indices, and we can therefore write it to our dense array at the end of the loop body.2

Having set up the Hamiltonian we finally print it to verify that everything worked as expected.

//Print the Hamiltonian.
for(unsigned int row = 0; row < basisSize; row++){
	for(unsigned int column = 0; column < basisSize; column++){
		Streams::out << real(hamiltonian[{row, column}])
			<< "\t";
	}
	Streams::out << "\n";
}

Result

Executing the application we get the following print out.

4 -1 0 -1 0 0 0 0 0 0 0 0
-1 4 -1 0 -1 0 0 0 0 0 0 0
0 -1 4 0 0 -1 0 0 0 0 0 0
-1 0 0 4 -1 0 -1 0 0 0 0 0
0 -1 0 -1 4 -1 0 -1 0 0 0 0
0 0 -1 0 -1 4 0 0 -1 0 0 0
0 0 0 -1 0 0 4 -1 0 -1 0 0
0 0 0 0 -1 0 -1 4 -1 0 -1 0
0 0 0 0 0 -1 0 -1 4 0 0 -1
0 0 0 0 0 0 -1 0 0 4 -1 0
0 0 0 0 0 0 0 -1 0 -1 4 -1
0 0 0 0 0 0 0 0 -1 0 -1 4

The validity of this output can be verified with the additional knowledge that in this specific case, the mapping between the physical indices and the linear Hilbert space indices is given by h = SIZE_Y*x + y.
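This can be verified programmatically: the sketch below rebuilds the same 12×12 matrix directly in the linear basis using h = SIZE_Y*x + y, independently of TBTK.

```cpp
#include <vector>

//Rebuild the 12x12 Hamiltonian of the example lattice directly in the
//linear basis h = SIZE_Y*x + y, for comparison with the printout above.
std::vector<std::vector<double>> buildDense(){
	const int SIZE_X = 4, SIZE_Y = 3;
	const double t = 1;
	int basisSize = SIZE_X*SIZE_Y;
	std::vector<std::vector<double>> H(
		basisSize, std::vector<double>(basisSize, 0)
	);
	for(int x = 0; x < SIZE_X; x++){
		for(int y = 0; y < SIZE_Y; y++){
			int h = SIZE_Y*x + y;
			H[h][h] = 4*t;
			//The x-neighbor sits SIZE_Y steps off the diagonal.
			if(x + 1 < SIZE_X){
				H[h + SIZE_Y][h] = -t;
				H[h][h + SIZE_Y] = -t;
			}
			//The y-neighbor sits one step off the diagonal,
			//except across row boundaries.
			if(y + 1 < SIZE_Y){
				H[h + 1][h] = -t;
				H[h][h + 1] = -t;
			}
		}
	}
	return H;
}
```

Note in particular that elements such as (2, 3) are zero even though the linear indices are adjacent, since sites {0, 2} and {1, 0} are not neighbors on the lattice.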

Full code

Full code is available in src/main.cpp in the project 2018_11_04 of the Second Tech code package. See the README for instructions on how to build and run.

Creating Models with complex geometries using an IndexFilter in TBTK

Most recent TBTK release at the time of writing: v1.0.3

One of the core strengths of TBTK is the generality of the quantum systems that can be modeled and the ease with which such models can be created. In this post we will take a look at how the use of an IndexFilter can simplify this process further. In particular, we will show how to set up the Schrödinger equation on a simple annulus-shaped geometry and how to calculate the energy and probability density for a given state.

In addition to showing how to use an IndexFilter, this post also provides a simple example of quantum mechanics in polar coordinates. The solutions we arrive at show similarities with the s, p, and d orbitals, and with the radial wave functions that appear as solutions to the Schrödinger equation in a central potential (the hydrogen atom), without the need to introduce a spatially varying potential. As such, it can give useful insights into aspects of quantum mechanics that can be difficult to grasp from the derivation of the solutions to the full-fledged Schrödinger equation in a central potential.

The material closely parallels the earlier post Direct access to wave function amplitudes and eigenvalues in TBTK, which provides more details on some of the content that is treated more briefly here.

Model

The model that we consider here is the Hamiltonian

    \[ H = -t\sum_{\langle ij\rangle}c_{i}^{\dagger}c_{j}, \]

on an annulus with inner radius R_i and outer radius R_{o}. Here \langle ij\rangle denotes summation over nearest neighbors and the Hamiltonian can be considered to be the (square lattice) discretized version of the Schrödinger equation with

    \[\begin{aligned} H_{S} &= \frac{-\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + V(x,y)\\ &= \frac{-\hbar^2}{2m}\left(\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2}{\partial \theta^2}\right) + V(r, \theta), \end{aligned}\]

with \hbar^2/2mdx^2 = \hbar^2/2mdy^2 = t and V(x, y) = V(r, \theta) = -4.

Parameters

To set up this model we first specify the following parameters.

//Parameters.
const unsigned int SIZE = 41;
const unsigned int SIZE_X = SIZE;
const unsigned int SIZE_Y = SIZE;
const double OUTER_RADIUS = SIZE/2;
const double INNER_RADIUS = SIZE/8;
double t = 1;
int state = 0;

The first three parameters specify an underlying 41\times 41 grid. The fourth and fifth parameters define the outer and inner radii of the annulus, and we also set t = 1. The last parameter indicates for which state we are going to calculate the energy and probability density.

IndexFilter

The basic idea behind an IndexFilter is to allow geometry-specific information to be separated from the specification of the Hamiltonian. It is possible to add if-statements in the model specification to guard against adding HoppingAmplitudes to sites that are not supposed to be included. However, passing all such terms to the Model and instead using a filter to exclude the invalid ones results in cleaner and less error-prone code.

To create an IndexFilter we need to inherit from the class AbstractIndexFilter and implement the functions clone() and isIncluded().

//IndexFilter.
class MyIndexFilter
	: public AbstractIndexFilter{
public:
	//Implements AbstractIndexFilter::clone().
	MyIndexFilter* clone() const{
		return new MyIndexFilter();
	}

	//Implements
	//AbstractIndexFilter::isIncluded().
	bool isIncluded(const Index &index) const{
		//Extract x and y from the Index.
		int x = index[0];
		int y = index[1];

		//Calculate the distance from the
		//center.
		double r = sqrt(
			pow(abs(x - (int)SIZE_X/2), 2)
			+ pow(abs(y - (int)SIZE_Y/2), 2)
		);

		//Return true if the distance is less
		//than the outer radius of the annulus,
		//but larger than the inner radius.
		if(r < OUTER_RADIUS && r > INNER_RADIUS)
			return true;
		else
			return false;
	}
};

For all basic use cases, clone() can be implemented as above, and we therefore leave this function without further comment.

The first thing to note about isIncluded() is that it needs to be ready to accept any Index that is passed to the Model and return true or false depending on whether the Index is included or not. This means that the filter writer needs to be aware of what types of indices can be expected to be added to the model. In our case this is simple: our indices will all be of the form {x, y}. We can therefore immediately read off the x and y values from the first and second subindices. The distance r from the center of the grid is then calculated, and true is returned if R_i < r < R_o.
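Stripped of the class structure, the filter reduces to a single predicate. Below is a standalone version, with the parameters from above hard-coded for the sake of a self-contained sketch.

```cpp
#include <cmath>

//Standalone version of the annulus predicate used by the IndexFilter:
//include a site if its distance from the grid center lies strictly
//between the inner and outer radius.
bool isInsideAnnulus(int x, int y){
	const int SIZE = 41;
	const double OUTER_RADIUS = SIZE/2;	//= 20 (integer division).
	const double INNER_RADIUS = SIZE/8;	//= 5.
	double r = std::sqrt(
		std::pow(x - SIZE/2, 2)
		+ std::pow(y - SIZE/2, 2)
	);
	return r < OUTER_RADIUS && r > INNER_RADIUS;
}
```

The strict inequalities mean that sites exactly on either radius are excluded, matching the comparison operators used in isIncluded() above.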

Create the Model

Having created the IndexFilter we are now ready to set up the actual model.

	//Create filter.
	MyIndexFilter filter;

	//Create the Model.
	Model model;
	model.setFilter(filter);
	for(unsigned int x = 0; x < SIZE_X; x++){
		for(unsigned int y = 0; y < SIZE_Y; y++){
			model << HoppingAmplitude(
				-t,
				{x + 1,	y},
				{x,     y}
			) + HC;
			model << HoppingAmplitude(
				-t,
				{x, y + 1},
				{x, y}
			) + HC;
		}
	}
	model.construct();

Compare this to the model creation in Direct access to wave function amplitudes and eigenvalues in TBTK. The first difference is that we create the filter and pass it to the Model through model.setFilter(filter). Other than this, the only difference is the absence of if-statements inside the loop. By using an IndexFilter we have made the annulus even simpler to set up than the square grid!

Solver

We are now ready to set up and run the solver.

//Setup and run the Solver.
Solver::Diagonalizer solver;
solver.setModel(model);
solver.run();

Extract the eigenvalue and probability density

The code for extracting the eigenvalue (energy) and probability density and writing this to file is almost identical to the code in Direct access to wave function amplitudes and eigenvalues in TBTK.

//Setup the PropertyExtractor.
PropertyExtractor::Diagonalizer
	propertyExtractor(solver);

//Print the eigenvalue for the given state.
Streams::out << "The energy of state "
	<< state << " is "
	<< propertyExtractor.getEigenValue(state)
	<< "\n";

//Calculate the probability density for the
//given state.
Array<double> probabilityDensity(
	{SIZE_X, SIZE_Y},
	0
);
for(unsigned int x = 0; x < SIZE_X; x++){
	for(unsigned int y = 0; y < SIZE_Y; y++){
		if(!filter.isIncluded({x, y}))
			continue;
		//Get the probability amplitude at
		//site (x, y) for the
		//given state.
		complex<double> amplitude
			= propertyExtractor.getAmplitude(
				state,
				{x, y}
			);

		//Calculate the probability density.
		probabilityDensity[{x, y}] = pow(
			abs(amplitude),
			2
		);
	}
}

//Plot the probability density.
Plotter plotter;
plotter.plot(probabilityDensity);
plotter.save("figures/ProbabilityDensity.png");

There are only two minor differences, the first of which is the zero as second argument in the expression

Array<double> probabilityDensity(
	{SIZE_X, SIZE_Y},
	0
);

This is used to initialize every element to zero to make sure that elements outside of the annulus are not left uninitialized. The second difference is the appearance of the following two lines inside the loop.

if(!filter.isIncluded({x, y}))
	continue;

Here the IndexFilter is used once more to check whether a given Index is included in the Model or not. If not, we skip the rest of the loop body for this particular Index to avoid requesting data that does not exist.

Results

Below we present the energies and probability densities for the first twelve states.

E_0 = -3.96

E_{1,2} = -3.95

E_{3,4} = -3.93

E_{5,6} = -3.90

E_{7,8} = -3.86

E_{9} = -3.84

E_{10,11} = -3.83

To understand these results we note that through separation of variables, the polar form of H_s can be seen to be solved by a function of the form

    \[ f_{n,m}(r, \theta) = f_{n}(r)e^{im\theta}. \]

Here the radial function needs to satisfy the boundary conditions f_{n}(R_{i}) = f_{n}(R_o) = 0. Moreover, it can be verified by application of H_{s} to f_{n,m}(r,\theta) that f_{n,m}(r, \theta) and f_{n,-m}(r, \theta) are solutions with the same energy. In the continuous case we therefore expect non-degenerate eigenvalues for m = 0 and doubly degenerate eigenvalues for m \neq 0. For the radial solutions we can further expect a set of increasingly oscillating functions with increasing n.
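The degeneracy can be seen in one line by acting with the angular part of H_{s} on f_{n,m}. Here we write the particle mass as m_{p} to avoid a clash with the angular wave number m; this notational choice is made only for this equation.

    \[ -\frac{\hbar^2}{2m_{p}}\frac{1}{r^2}\frac{\partial^2}{\partial\theta^2}f_{n}(r)e^{im\theta} = \frac{\hbar^2 m^2}{2m_{p}r^2}f_{n}(r)e^{im\theta}, \]

which depends on the angular wave number only through m^2, so m and -m give the same energy.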

The numerical results are indeed in agreement with this observation. The 0th state displays a simple structure with a single radial oscillation and m = 0. States 1-8 appear in pairs with increasing wave number m along the \theta direction. After state 8 it is no longer energetically favorable to keep increasing the angular wave number; instead, the next state is found by once again returning to the non-degenerate case m = 0, but increasing the radial wave number by one. This is then followed by another set of degenerate states with two radial oscillations and m = \pm 1.

These observations are only approximate though, which we realize by noting that the probability density |f_{n,m}(r, \theta)|^2 actually is a constant function of \theta for all m. To understand why we see oscillations also in the probability density, we first note that

    \[ \frac{1}{\sqrt{2}}\left(f_{n,m}(r, \theta) + f_{n,-m}(r, \theta)\right) = \sqrt{2}f_{n}(r)\cos(m\theta) \]

and

    \[ \frac{1}{\sqrt{2}}\left(f_{n,m}(r, \theta) - f_{n,-m}(r, \theta)\right) = \sqrt{2}if_{n}(r)\sin(m\theta) \]

are equally valid eigenstates and do display spatial variation also in the probability density. Moreover, the degeneracy is only approximate in the discrete case, which results in the true eigenstates resembling these real combinations more closely. Why the spatially modulated functions are preferred once the degeneracy is lifted can likely be understood by noting that spatial modulation allows the wave function of the lower-energy state to avoid the areas that would increase its energy.

We finally note how the geometry of the problem results in solutions that naturally separate into a radial wave function multiplied by an angular wave function. This is in analogy with the solutions to the Schrödinger equation in a central potential, which give rise to a separation between orbital and radial wave functions with independent wave numbers. However, in two dimensions the angular wave functions appear as a single non-degenerate solution followed by pairs of degenerate solutions (1, 2, 2, 2, …). In contrast, the spherical harmonics have two angular coordinates, and the s, p, d orbitals etc. instead have the degeneracies (1, 3, 5, …). First considering this simpler manifestation of the phenomenon can be useful for getting a better understanding also of the solutions to the Schrödinger equation for the hydrogen atom.

Full code

Full code is available in src/main.cpp in the project 2018_11_01 of the Second Tech code package. See the README for instructions on how to build and run.

Direct access to wave function amplitudes and eigenvalues in TBTK

Most recent TBTK release at the time of writing: v1.0.3

The wave functions and corresponding eigenvalues provide complete information about a system, from which other properties can be calculated. In this post we will therefore take a look at how to extract these directly using the Solver::Diagonalizer. In particular, we show how to calculate the energy and probability density for a given state in a two-dimensional rectangular geometry.

Model

The model that we will consider is a simple two-dimensional square lattice with nearest neighbor hopping t = 1, for which the Hamiltonian is

    \[ H = -t\sum_{\langle ij\rangle}c_{i}^{\dagger}c_{j}. \]

Here \langle ij\rangle denotes summation over nearest neighbors.

The Hamiltonian H can either be viewed as the simplest example of a two-dimensional tight-binding model, or as a discretized version of the two-dimensional Schrödinger equation

    \[ H_{S} = \frac{-\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + V(x,y), \]

with \hbar^2/2mdx^2 = \hbar^2/2mdy^2 = t and V(x, y) = -4. We therefore begin by introducing the following parameters.

//Parameters.
const unsigned int SIZE_X = 20;
const unsigned int SIZE_Y = 20;
double t = 1;
int state = 0;

The last parameter is used to indicate for which state we are going to calculate the energy and probability density.

We next create the model and loop over each site to feed the model with the hopping amplitudes. We could achieve this by, for each site, adding the hopping amplitudes corresponding to

    \[\begin{aligned} -t&\left(c_{(x+1,y)}^{\dagger}c_{(x,y)} + c_{(x-1,y)}^{\dagger}c_{(x,y)}\right.\\ &+\left.c_{(x,y+1)}^{\dagger}c_{(x,y)} + c_{(x,y-1)}^{\dagger}c_{(x,y)}\right). \end{aligned}\]

However, we note that this is equivalent to adding

    \[ -t\left(c_{(x+1,y)}^{\dagger}c_{(x,y)} + c_{(x,y+1)}^{\dagger}c_{(x,y)}\right) + H.c. \]

at each site, since for example -tc_{(x-1,y)}^{\dagger}c_{(x,y)} is the Hermitian conjugate of -tc_{(x,y)}^{\dagger}c_{(x-1,y)}.1 Using the latter notation we implement this as follows.

//Create the Model.
Model model;
for(unsigned int x = 0; x < SIZE_X; x++){
	for(unsigned int y = 0; y < SIZE_Y; y++){
		if(x+1 < SIZE_X){
			model << HoppingAmplitude(
				-t,
				{x + 1, y},
				{x,     y}
			) + HC;
		}
		if(y+1 < SIZE_Y){
			model << HoppingAmplitude(
				-t,
				{x, y + 1},
				{x, y}
			) + HC;
		}
	}
}
model.construct();

The if statements are added to guard against adding hopping amplitudes beyond the boundary of the system.

Solver

We are now ready to set up and run the solver.

//Setup and run the Solver.
Solver::Diagonalizer solver;
solver.setModel(model);
solver.run();

Extract the eigenvalue and probability density

To extract the eigenvalue and probability density we first set up the PropertyExtractor.

//Setup the PropertyExtractor.
PropertyExtractor::Diagonalizer
    propertyExtractor(solver);

After this we print the energy for the given state by requesting it from the PropertyExtractor.

//Print the eigenvalue for the given state.
Streams::out << "The energy of state "
    << state << " is "
    << propertyExtractor.getEigenValue(state)
    << "\n";

Further, to calculate the probability density we create an array, which we fill with the square of the absolute value of the probability amplitude.

//Calculate the probability density for the
//given state.
Array<double> probabilityDensity(
    {SIZE_X, SIZE_Y}
);
for(unsigned int x = 0; x < SIZE_X; x++){
	for(unsigned int y = 0; y < SIZE_Y; y++){
		//Get the probability amplitude at
		//site (x, y) for the given state.
		complex<double> amplitude
			= propertyExtractor.getAmplitude(
				state,
				{x, y}
			);

		//Calculate the probability density.
		probabilityDensity[{x, y}] = pow(
			abs(amplitude),
			2
		);
	}
}

Finally, we plot the probability density and save it to file.

//Plot the probability density.
Plotter plotter;
plotter.plot(probabilityDensity);
plotter.save("figures/ProbabilityDensity.png");

Results

Below we present the results for the six lowest energy states for a lattice size of 20×20. The states separate into four groups with eigenvalues

    \[ E \in \{-3.9553, -3.88881, -3.82229, -3.7796\}. \]

The second and fourth of these eigenvalues are degenerate with two eigenfunctions per energy. We can understand this grouping by realizing that the solutions to our problem are sine functions with wave numbers (m, n), where for the six lowest states

    \[ (m, n) \in \{(1, 1), (2, 1), (1, 2), (2, 2), (3, 1), (1, 3)\}. \]

Among these, (2, 1) and (1, 2), as well as (3, 1) and (1, 3), are degenerate pairs.

E = -3.9553

E = -3.88881 (two degenerate states)

E = -3.82229

E = -3.7796 (two degenerate states)


Full code

Full code is available in src/main.cpp in the project 2018_10_27 of the Second Tech code package. See the README for instructions on how to build and run.

Using TBTK to calculate the density of states (DOS) of a 1D, 2D, and 3D square lattice

Most recent TBTK release at the time of writing: v1.0.2

In this post we will walk through how to calculate and plot the density of states (DOS) of a 1D, 2D, and 3D square lattice using TBTK. To achieve this, we will first implement three functions for creating the corresponding models. We will then create a fourth function in which the DOS is calculated for each of the three models.

This example also provides a simple demonstration of how TBTK naturally supports the creation of general purpose code, in this case the calculation of the DOS, that can be used independently of the details of the model.

Creating the Model

One dimension

In one dimension the Hamiltonian is given by

    \[ H_{1D} = -2t\sum_{k_x}\cos(k_x)c_{k_x}^{\dagger}c_{k_x}, \]

where t is a hopping parameter that we will set to one.

To create the model, we loop over the summation index and feed each of the hopping amplitudes to the Model. Note how we let the numerical indices run over the non-negative integers 0 to 9999, while the argument of the cos function falls in the interval [-\pi, \pi).

Model createModel1D(){
    //Parameters.
    const int SIZE_X = 10000;
    double t = 1;

    //Create the Model.
    Model model;
    for(int kx = 0; kx < SIZE_X; kx++){
        double KX = 2*M_PI*kx/(double)SIZE_X - M_PI;
        model << HoppingAmplitude(
            -2*t*cos(KX),
            {kx},
            {kx}
        );
    }

    return model;
}

Two dimensions

In two dimensions the Hamiltonian is given by

    \[ H_{2D} = -2t\sum_{k_x,k_y}\left(\cos(k_x) + \cos(k_y)\right)c_{\mathbf{k}}^{\dagger}c_{\mathbf{k}}, \]

where \mathbf{k} = (k_x, k_y).

The corresponding Model is created as follows using a 500\times 500 grid.

Model createModel2D(){
    //Parameters.
    const int SIZE_X = 500;
    const int SIZE_Y = 500;
    double t = 1;

    //Create the Model.
    Model model;
    for(int kx = 0; kx < SIZE_X; kx++){
        for(int ky = 0; ky < SIZE_Y; ky++){
            double KX = 2*M_PI*kx/(double)SIZE_X;
            double KY = 2*M_PI*ky/(double)SIZE_Y;

            model << HoppingAmplitude(
                -2*t*(cos(KX) + cos(KY)),
                {kx, ky},
                {kx, ky}
            );
        }
    }

    return model;
}

Three dimensions

In three dimensions the Hamiltonian is given by

    \[ H_{3D} = -2t\sum_{k_x,k_y,k_z}\left(\cos(k_x) + \cos(k_y) + \cos(k_z)\right)c_{\mathbf{k}}^{\dagger}c_{\mathbf{k}}, \]

where \mathbf{k} = (k_x, k_y, k_z).

The corresponding Model is created as follows using a 200\times 200\times 200 grid.

Model createModel3D(){
    //Parameters.
    const int SIZE_X = 200;
    const int SIZE_Y = 200;
    const int SIZE_Z = 200;
    double t = 1;

    //Create the Model.
    Model model;
    for(int kx = 0; kx < SIZE_X; kx++){
        for(int ky = 0; ky < SIZE_Y; ky++){
            for(int kz = 0; kz < SIZE_Z; kz++){
                double KX = 2*M_PI*kx/(double)SIZE_X;
                double KY = 2*M_PI*ky/(double)SIZE_Y;
                double KZ = 2*M_PI*kz/(double)SIZE_Z;

                model << HoppingAmplitude(
                    -2*t*(cos(KX) + cos(KY) + cos(KZ)),
                    {kx, ky, kz},
                    {kx, ky, kz}
                );
            }
        }
    }

    return model;
}

Calculating the DOS

We are now ready to implement the main function for calculating the DOS. To take advantage of the fact that the Hamiltonian consists of independent blocks, one for each \mathbf{k} value, we use the Solver::BlockDiagonalizer. Knowing that the eigenvalues lie within the interval [-6, 6] for all three Models, we next set up the PropertyExtractor to work on the energy interval [-7, 7] and calculate the DOS. Finally, we plot the DOS, using Gaussian smoothing with \sigma = 0.05 and a convolution window of 101 energy points, and save the results to file.

int main(){
    //Filenames to save the figures as.
    string filenames[3] = {
        "figures/DOS_1D.png",
        "figures/DOS_2D.png",
        "figures/DOS_3D.png",
    };

    //Loop over 1D, 2D, and 3D.
    for(int n = 0; n < 3; n++){
        //Create the Model.
        Model model;
        switch(n){
        case 0:
            model = createModel1D();
            break;
        case 1:
            model = createModel2D();
            break;
        case 2:
            model = createModel3D();
            break;
        default:
            Streams::out << "Error: Invalid case value.\n";
            exit(1);
        }

        //Setup and run the Solver.
        Solver::BlockDiagonalizer solver;
        solver.setModel(model);
        solver.run();

        //Setup the PropertyExtractor.
        PropertyExtractor::BlockDiagonalizer propertyExtractor(solver);
        propertyExtractor.setEnergyWindow(-7, 7, 1000);
        Property::DOS dos = propertyExtractor.calculateDOS();

        //Normalize the DOS.
        for(unsigned int c = 0; c < dos.getResolution(); c++)
                dos(c) = dos(c)/model.getBasisSize();

        //Plot and save the result.
        Plotter plotter;
        plotter.setLabelX("Energy");
        plotter.setLabelY("DOS");
        plotter.plot(dos, 0.05, 101);
        plotter.save(filenames[n]);
    }

    return 0;
}

Results

Below we show the result of the calculation for the three different cases.

One dimensional

Two dimensional

Three dimensional

Full code

Full code is available in src/main.cpp in the project 2018_10_23 of the Second Tech code package. See the README for instructions on how to build and run.

TBTK v1.0.0 released

For the past few years, a growing collection of data structures and algorithms tailored for quantum mechanics has been developed under the name TBTK. The aim is to significantly simplify the development of applications that perform quantum mechanical calculations. With particular focus on providing well-thought-through and efficient general purpose data structures for problems formulated in second-quantized form, the goal is to enable the scientific community to write codes that are much easier to integrate with each other. This will be essential for bringing about the second quantum revolution.

Documentation and preprint

In preparation for this release, significant effort has gone into documenting the code and making sure that it is as easy as possible to get started. As part of this documentation effort, a preprint has recently been published on arXiv. A quick introduction is also available in the README on the project's GitHub page and in the blog post Introduction to TBTK – A software development kit for quantum mechanical calculations. Further, a much more extensive manual and API can be built from the source or accessed immediately at second-quantization.com.

The significance of v1.0.0

The release of v1.0.0 marks an important point where the code has stabilized enough that a more structured development process will be adopted. In particular, this means that changes that break the API will only be allowed in new versions of TBTK where the major version number is incremented. That is, developers can from now on be confident that code written for v1.0.0 will be compatible with all future releases with a version number v1.x.y. The second version number will be incremented when significant extensions are added, while the last version number is reserved for bug fixes and similar minor changes.

Using TBTK to time C++ code

Most recent TBTK release at the time of writing: v0.9.6

When developing computationally intensive algorithms, it is important to be able to time the execution to find the main bottlenecks. TBTK therefore provides a flexible Timer class that allows for simple profiling of C++ code.

Include file

The Timer is made available through

#include "TBTK/Timer.h"

Stacked tick-tock calls

In its basic use case the Timer acts as a time stamp stack that allows a single time stamp to be pushed onto the stack through the call

Timer::tick("Optional tag.");

The latest time stamp can similarly be popped from the stack using

Timer::tock();

When a time stamp is popped, the time difference between the current time and the time at which the time stamp was pushed onto the stack is printed to the standard output. In addition, the current stack depth and the optional tag are also printed. This allows for easy identification of which part of the code a specific time difference corresponds to.

In the following example, nested tick-tock calls are used to simultaneously measure the total calculation time of a nested loop as well as the time taken by each individual iteration of the outer loop.

Timer::tick("Full calculation.");
double sum = 0;
for(unsigned int i = 0; i < 10; i++){
    Timer::tick("Single iteration of the outer loop.");
    for(unsigned int j = 0; j < 10000000; j++)
        sum += sqrt(i*j);
    Timer::tock();
}
Timer::tock();

A typical output is

(1) 23ms 356us 578ns 	Single iteration of the outer loop.
(1) 22ms 121us 595ns 	Single iteration of the outer loop.
(1) 20ms 894us 633ns 	Single iteration of the outer loop.
(1) 20ms 693us 151ns 	Single iteration of the outer loop.
(1) 20ms 753us 656ns 	Single iteration of the outer loop.
(1) 20ms 632us 406ns 	Single iteration of the outer loop.
(1) 20ms 693us 43ns 	Single iteration of the outer loop.
(1) 20ms 751us 261ns 	Single iteration of the outer loop.
(1) 20ms 420us 362ns 	Single iteration of the outer loop.
(1) 20ms 753us 927ns 	Single iteration of the outer loop.
(0) 211ms 272us 43ns 	Full calculation.

We can see that each iteration takes 20-23 ms to execute, while the full calculation takes 211 ms. This is consistent: ten iterations of roughly 21 ms each account for about 210 ms, with the small remainder being overhead. On the left we also see the stack depth, while the tag is displayed to the right.

As a second example, we consider the following recursive function.

double recursive(int depth){
        Timer::tick("Recursive step " + to_string(depth));

        double answer;
        if(depth == 10){
                answer = 0;
        }
        else{
                double sum = 0;
                for(unsigned int i = 0; i < 10000000; i++)
                        sum += sqrt(depth*i);
                answer = sum + recursive(depth+1);
        }

        Timer::tock();

        return answer;
}

Each time the recursive function is entered, a new time stamp is pushed onto the stack. After ten consecutive function calls, the function begins to return and the time stamp stack is unwound. Note that the time printed at each depth includes the time of all deeper calls, which is why consecutive entries differ by the roughly 63 ms taken by a single loop. The resulting output is

(10) 501ns           	Recursive step 10
(9) 62ms 570us 861ns 	Recursive step 9
(8) 125ms 299us 748ns 	Recursive step 8
(7) 187ms 827us 863ns 	Recursive step 7
(6) 250ms 704us 696ns 	Recursive step 6
(5) 313ms 284us 290ns 	Recursive step 5
(4) 376ms 373us 532ns 	Recursive step 4
(3) 439ms 518us 286ns 	Recursive step 3
(2) 502ms 852us 691ns 	Recursive step 2
(1) 566ms 298us 567ns 	Recursive step 1
(0) 617ms 420us 578ns 	Recursive step 0

Accumulators

While the stacked mode allows for flexible measurements of multiple pieces of code at the same time, it is not suitable for a case like the following

for(unsigned int i = 0; i < 1000000; i++){
    doA();
    doB();
}

Here both doA() and doB() are executed many times and we may want to figure out which one is the bottleneck. Moreover, single measurements of doA() and doB() may not be useful for determining the overall performance. For example, doA() and doB() may take widely different times to execute from call to call, and the execution time for doB() may even depend on what the previous call to doA() did, and vice versa. In such cases it is only meaningful to measure the accumulated time taken by each function over the full loop.

For this reason, the Timer also has an accumulator mode. To use this mode, we first create two new accumulators.

unsigned int idA = Timer::createAccumulator("Accumulator A");
unsigned int idB = Timer::createAccumulator("Accumulator B");

The accumulators can then be used independently by supplying the corresponding ID to the tick-tock calls.

for(unsigned int i = 0; i < 1000000; i++){
    Timer::tick(idA);
    doA();
    Timer::tock(idA);

    Timer::tick(idB);
    doB();
    Timer::tock(idB);
}

Since tick-tock pairs can be executed millions of times for any given accumulator, no information is printed by the tock calls in this mode. Instead, the time accumulated so far can be printed at any time through the call

Timer::printAccumulators();

A typical output looks like

============================== Accumulator table ==============================
ID        Time                                  Tag                                                                                            
[0]                       650ms 618us 220ns     Accumulator A                                                                                       
[1]                    2s 870ms 110us  19ns     Accumulator B                                                                                       
===============================================================================

Note that time is added to the accumulators when the tock calls are made. Therefore, when printing the accumulator table, only completed time intervals contribute to the listed time. We also note that it is possible to use stacked tick-tock calls and accumulator calls simultaneously.

Learn more

For more information about the Timer class, see the corresponding manual page and API entry.