Alexander Fufaev

Kronecker Delta: 4 Important Rules and Scalar Product in Index Notation

It is impossible to imagine theoretical physics without the Kronecker delta. You will encounter this relatively simple yet powerful tensor in practically all fields of theoretical physics. For example, it is used to...

  • write long expressions more compactly.
  • simplify complicated expressions.

In combination with the Levi-Civita tensor, the two tensors are very powerful! That's why it's worth understanding how the Kronecker delta works.

Definition and Examples

The Kronecker delta \(\delta_{ \class{blue}{i} \class{red}{j} }\) is a lowercase Greek delta that yields either 1 or 0, depending on which values its two indices \(\class{blue}{i} \) and \(\class{red}{j}\) take on. The maximal value of an index corresponds to the dimension considered, so in three-dimensional space \(\class{blue}{i} \) and \(\class{red}{j}\) run from 1 to 3.

The Kronecker delta is equal to 1 if \( \class{blue}{i} \) and \(\class{red}{j}\) are equal, and equal to 0 if \(\class{blue}{i} \) and \(\class{red}{j}\) are unequal.

Definition: Kronecker delta
\delta_{ \class{blue}{i} \class{red}{j} } ~=~ \begin{cases} 1, &\mbox{} \class{blue}{i}=\class{red}{j} \\
0, &\mbox{} \class{blue}{i} \neq \class{red}{j} \end{cases}
Examples

  • \( \delta_{11} ~=~ 1 \) - because both indices are the same.
  • \( \delta_{23} ~=~ 0 \) - because both indices are different.
  • \( a \,\delta_{33} ~=~ a \cdot 1 ~=~ a \)
  • \( \delta_{23} \, \delta_{22} ~=~ 0 \cdot 1 ~=~ 0 \)
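As a quick sanity check, the definition can be sketched in a few lines of Python (the function name `kronecker_delta` is just an illustrative choice):

```python
import numpy as np

def kronecker_delta(i: int, j: int) -> int:
    """Return 1 if the two indices are equal, 0 otherwise."""
    return 1 if i == j else 0

# The examples from above:
assert kronecker_delta(1, 1) == 1                          # delta_11 = 1
assert kronecker_delta(2, 3) == 0                          # delta_23 = 0
a = 5.0
assert a * kronecker_delta(3, 3) == a                      # a * delta_33 = a
assert kronecker_delta(2, 3) * kronecker_delta(2, 2) == 0  # 0 * 1 = 0

# In 3D, the values delta_ij form the 3x3 identity matrix:
table = np.array([[kronecker_delta(i, j) for j in range(1, 4)]
                  for i in range(1, 4)])
assert np.array_equal(table, np.eye(3, dtype=int))
```

The last assertion already hints at the matrix picture used later: in \(n\) dimensions, the Kronecker delta is the \(n \times n\) identity matrix.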

Einstein's Summation Convention

In order to represent an expression like

Summation with sum sign over an index in the product of two Kronecker deltas
\sum_{\class{red}{j}~=~1}^{3} ~ \delta_{\class{blue}{i}\class{red}{j}} \, \delta_{\class{red}{j}\class{green}{k}} ~=~ \delta_{\class{blue}{i}\class{red}{1}} \, \delta_{\class{red}{1}\class{green}{k}} ~+~ \delta_{\class{blue}{i}\class{red}{2}} \, \delta_{\class{red}{2}\class{green}{k}} ~+~ \delta_{\class{blue}{i}\class{red}{3}} \, \delta_{\class{red}{3}\class{green}{k}}

or an expression like

Summation with sum sign over an index in the product of two vector components
\sum_{\class{blue}{i}~=~1}^{3} \, a_{\class{blue}{i}} \, b_{\class{blue}{i}} ~=~ a_{\class{blue}{1}}\,b_{\class{blue}{1}} ~+~ a_{\class{blue}{2}}\,b_{\class{blue}{2}} ~+~ a_{\class{blue}{3}}\,b_{\class{blue}{3}}

compactly, we agree on the following rule:

Summation convention

We omit the sum sign but keep in mind that if an index appears exactly twice in an expression, we sum over that index.

Example

In the following scalar product we sum over \(i\):

Summation with sum sign over the index i in the product of two vector components
\sum_{\class{blue}{i}~=~1}^{3} \, a_{\class{blue}{i}} \, b_{\class{blue}{i}} ~=~ a_{\class{blue}{1}}\,b_{\class{blue}{1}} ~+~ a_{\class{blue}{2}}\,b_{\class{blue}{2}} ~+~ a_{\class{blue}{3}}\,b_{\class{blue}{3}}

We omit the sum sign and keep in mind that we sum over \(i\). Thus, using Einstein's summation convention, the scalar product is:

Summation without sum sign via index i in the product of two vector components
a_{\class{blue}{i}} \, b_{\class{blue}{i}} ~=~ a_{\class{blue}{1}}\,b_{\class{blue}{1}} ~+~ a_{\class{blue}{2}}\,b_{\class{blue}{2}} ~+~ a_{\class{blue}{3}}\,b_{\class{blue}{3}}

Another advantage of the summation convention (in addition to compactness) is formal commutativity. For example, you may write the expression \( \varepsilon_{ijk} \, \boldsymbol{\hat{e}}_{i} \, s_j \, \delta_{km} \) in any order you wish, for example like this: \( \varepsilon_{ijk} \, \delta_{km} \, s_j \, \boldsymbol{\hat{e}}_{i} \). This might help you see what can be shortened or simplified further.
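NumPy's `einsum` implements exactly this convention: any index that appears twice in the subscript string is summed over. A small sketch (the array values are chosen arbitrarily):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# 'i,i->' means: the index i appears twice, so sum over it
# (Einstein's summation convention), leaving a scalar.
scalar = np.einsum('i,i->', a, b)

# Written out: a_1*b_1 + a_2*b_2 + a_3*b_3 = 4 + 10 + 18 = 32
assert scalar == a[0]*b[0] + a[1]*b[1] + a[2]*b[2] == 32.0

# Formal commutativity: the factor order does not matter.
assert np.einsum('i,i->', b, a) == scalar
```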

But be careful! There are exceptions, for example the differential operator \( \partial_j \), which acts on everything to its right.

Derivative operator does not commute in index notation
f_{\class{red}{j}} \, \partial_{\class{red}{j}} ~\neq~ \partial_{\class{red}{j}} \, f_{\class{red}{j}}

You can't move a factor that is supposed to be differentiated in front of the derivative. So you should be careful with operators in index notation.

4 Rules for Kronecker delta

Let's look at four useful calculation rules with Kronecker delta that you can use whenever summing over double indices.

Rule #1

Indices \( ij \) may be interchanged:

Kronecker delta is symmetric
\delta_{\class{blue}{i}\class{red}{j}} ~=~ \delta_{\class{red}{j}\class{blue}{i}}

Why is that? According to the definition, if the indices \(i\) and \(j\) are equal, then \( \delta_{ij} \) is equal to 1, and so is \(\delta_{ji} \). And if the indices are unequal, both \( \delta_{ij} \) and \(\delta_{ji} \) are equal to zero. So as you can see: the Kronecker delta is symmetric!

Rule #2

If the product of two or more Kronecker deltas contains a summation index \( j \), then the product can be shortened, such that the summation index \( j \) disappears:

Index contraction in the product of two Kronecker deltas
\delta_{\class{blue}{i}\class{red}{j}} \, \delta_{\class{red}{j}\class{green}{k}} ~=~ \delta_{\class{blue}{i}\class{green}{k}}

Why is that? Let's consider, for example, the case where the indices \( i \) and \( j \) are equal (\( i = j \)) and the indices \( j \) and \( k \) are unequal (\( j \neq k \)). Then \( i \) and \( k \) must also be unequal: \( i \neq k \). So \(\delta_{jk}\) is zero and therefore the whole left-hand side is zero: \( \delta_{ij} \, \delta_{jk} ~=~ 0 \). \(\delta_{ik}\) on the right-hand side is also zero, because \(i\) and \(k\) are different. The equation is fulfilled. You can prove all other possible cases in the same way. So instead of writing two deltas you can just write \( \delta_{ik} \). We say: the summation index \(j\) is contracted.

Example: Apply rule #2
Example how to combine two Kronecker deltas
\delta_{\class{green}{k}\class{violet}{m}} \, \delta_{\class{violet}{m}n} ~=~ \delta_{\class{green}{k}n}

The summation index here is \(m\), so you can eliminate it by contracting it.

Example: Contraction order
Sum up product of three Kronecker deltas
\delta_{\class{blue}{i}\class{red}{j}} \, \delta_{\class{green}{k}\class{red}{j}} \, \delta_{\class{blue}{i}n} ~=~ \delta_{\class{green}{k}n}

Here you have two summation indices \(i\) and \(j\). So in principle you can eliminate both of them.

First possible way of contraction: From the first rule you know that the Kronecker delta is symmetric, so you can swap \(k\) and \(j\) in \(\delta_{kj}\) and then contract the index \(j\) first. You get: \(\delta_{ik} \, \delta_{in}\). Then you contract the index \(i\). The simplified result is \( \delta_{kn} \).

Second possible way of contraction: First reorder the product to \( \delta_{kj} \, \delta_{ij} \, \delta_{in} \). Contract the summation index \(i\) first. You get: \( \delta_{kj} \, \delta_{jn} \). Then contract the second summation index \(j\). You get: \( \delta_{kn} \).

Remember: the contraction order is not important here. In both cases you get the same result, so which way of simplification you take doesn't matter!
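Numerically, the Kronecker delta in 3D is just the 3×3 identity matrix, and rule #2 is a matrix product of identity matrices. A sketch using `numpy.einsum`:

```python
import numpy as np

delta = np.eye(3)  # delta_ij as a 3x3 identity matrix

# Rule #2: summing over j in delta_ij * delta_jk gives delta_ik.
contracted = np.einsum('ij,jk->ik', delta, delta)
assert np.array_equal(contracted, delta)

# The three-delta example: delta_ij delta_kj delta_in = delta_kn,
# summed over both i and j.
result = np.einsum('ij,kj,in->kn', delta, delta, delta)
assert np.array_equal(result, delta)
```

Since the free indices \(k\) and \(n\) again label an identity matrix, both assertions confirm the contraction rules.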

Rule #3

If the index in \( a_j \) also occurs in Kronecker delta \( \delta_{jk} \), then the Kronecker delta disappears and the factor \( a_j \) gets the other index \(k\):

Index contraction in the product of Kronecker delta and one factor
a_{\class{red}{j}} \, \delta_{\class{red}{j}\class{green}{k}} ~=~ a_{\class{green}{k}}

Why is that? This rule is basically another case of index contraction. It tells you that summation indices carried by factors other than a Kronecker delta can be contracted as well.

Example: Apply rule #3
Example how to combine Kronecker delta with one factor
\Gamma_{\class{red}{j}\class{violet}{m}\class{green}{k}} \, \delta_{n\class{green}{k}} ~=~ \Gamma_{\class{red}{j}\class{violet}{m}n}
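Rule #3 can also be checked numerically: contracting a vector with the identity matrix returns the vector unchanged, only carrying the other index. A sketch (the vector values are arbitrary):

```python
import numpy as np

delta = np.eye(3)
a = np.array([7.0, 8.0, 9.0])

# Rule #3: a_j * delta_jk, summed over j, is just a_k.
# The Kronecker delta disappears and merely relabels the index.
a_k = np.einsum('j,jk->k', a, delta)
assert np.array_equal(a_k, a)
```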

Rule #4

If \( j \) runs from 1 to \(n\), then:

\delta_{\class{red}{j}\class{red}{j}} ~=~ n

Why is that? According to the summation convention, the summation is carried out over \(j\) here. So \(\delta_{jj}\) is equal to \(\delta_{11}\) plus \(\delta_{22}\) plus \(\delta_{33}\) and so on up to \(\delta_{nn}\). Each Kronecker delta yields 1, because its index values are equal. So 1 + 1 + 1 and so on, up to the \(n\)-th term, results in \(n\):

Write out Kronecker delta with two equal indices
\delta_{\class{red}{j}\class{red}{j}} &~=~ \delta_{\class{red}{11}} ~+~ \delta_{\class{red}{22}} ~+~...~+~ \delta_{\class{red}{nn}} \\\\
&~=~ 1 ~+~ 1 ~+~...~+~ 1 \\\\
& ~=~ n
3d example

If \( \class{red}{j} \) takes the values from 1 to 3, the sum 1 + 1 + 1 results in 3:

\delta_{\class{red}{j}\class{red}{j}} &~=~ \delta_{\class{red}{11}} ~+~ \delta_{\class{red}{22}} ~+~ \delta_{\class{red}{33}} \\\\
&~=~ 1 ~+~ 1 ~+~ 1 \\\\
& ~=~ 3
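Rule #4 has a neat numerical interpretation: \(\delta_{jj}\) is the trace of the \(n \times n\) identity matrix, which is the dimension \(n\). A quick check:

```python
import numpy as np

# delta_jj = trace of the identity matrix = dimension n.
# 'jj->' tells einsum to sum the diagonal entries.
for n in (2, 3, 4, 50):
    assert np.einsum('jj->', np.eye(n)) == n
    assert np.trace(np.eye(n)) == n  # equivalent formulation
```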

The 3 most common mistakes you should avoid making

If you use the summation convention and the above rules, you must also pay attention to correct notation: summation is carried out only if an index occurs exactly twice on one side of an equation.

  1. Mistake #1
    v_{\class{blue}{i}} ~=~ b_{\class{blue}{i}} \, c_{\class{blue}{i}}

    Why? Because on the right-hand side the index \(i\) occurs twice, so you sum over \( i \). But on the left-hand side \(i\) appears as a free index, which makes no sense: an index cannot be a summation index on one side and a free index on the other. To correct this expression, rename the summation index on the right-hand side to \( j \).

  2. This expression is not formally wrong but very prone to errors:

    Mistake #2
    \delta_{\class{red}{j}\class{green}{k}} \, v_{\class{green}{k}} ~=~ \delta_{\class{violet}{m}\class{green}{k}} \, r_{\class{green}{k}}

    Because the summation index \( k \) appears on both sides of the equation. To correct this, rename the summation index on one side of the equation, for example to \(n\).

  3. Mistake #3
    \delta_{\class{blue}{i}1} \, \delta_{1\class{green}{k}} ~=~ \delta_{\class{blue}{i}\class{green}{k}}

    Why? Because here you try to sum over an ordinary number, as if it were a summation index. The number "1" occurs twice, but it is not a variable index over which you can sum. Therefore you cannot apply the contraction rule to fixed numbers.

Scalar product with Kronecker delta

Consider a three-dimensional vector with the components \(x\), \(y\) and \(z\):

Column vector with three components
\boldsymbol{v} ~=~ \begin{bmatrix}x \\ y \\ z \end{bmatrix}

You can represent this vector \( \boldsymbol{v} \) in an orthonormal basis as follows:

Vector as linear combination of basis vectors
\boldsymbol{v} ~=~ x \, \boldsymbol{\hat{e}}_x ~+~ y \, \boldsymbol{\hat{e}}_y ~+~ z \, \boldsymbol{\hat{e}}_z

Three basis vectors span an orthogonal coordinate system.

Here \(\boldsymbol{\hat{e}}_x\), \(\boldsymbol{\hat{e}}_y\) and \(\boldsymbol{\hat{e}}_z\) are three basis vectors which are orthogonal to each other and normalized. In this case they span an orthogonal three-dimensional coordinate system.

To use the Kronecker delta, we write vectors in index notation. Here we do not denote the vector components with different letters \(x,y,z\); instead we choose one letter (here the letter \( v \)) and number the vector components consecutively. The vector components are then called \(v_1\), \(v_2\) and \(v_3\), and the basis expansion looks like this:

Vector as linear combination of basis vectors in other notation
\boldsymbol{v} &~=~ \left( v_1,v_2,v_3\right) \\\\
&~=~ v_1 \, \boldsymbol{\hat{e}}_1 ~+~ v_2 \, \boldsymbol{\hat{e}}_2 ~+~ v_3 \, \boldsymbol{\hat{e}}_3

One of the advantages of index notation is that you will never run out of letters for the vector components. Just imagine a fifty-dimensional vector: there aren't even enough letters to give each component \(v_1\), \(v_2\), ... \(v_{50}\) of the vector a unique letter! It gets worse when you want to write out the basis expansion of such a fifty-dimensional vector.

Another advantage of index notation is that by numbering the vector components in this way, you can use the sum sign to represent the basis expansion more compactly. It becomes even more compact if we omit the big sum sign according to the summation convention. Look how compactly the vector \(\boldsymbol{v}\) can be represented in a basis:

Vector in index notation
\boldsymbol{v} ~=~ v_{\class{red}{j}} \, \boldsymbol{\hat{e}}_{\class{red}{j}}

Here, as you know, we sum over index \(j\). Whether you call the index \(j\), \(i\) or \(k\) or any other letter is of course up to you.
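As a sketch, we can rebuild a vector from its components and the standard basis \(\boldsymbol{\hat{e}}_1, \boldsymbol{\hat{e}}_2, \boldsymbol{\hat{e}}_3\) (the component values are arbitrary):

```python
import numpy as np

v = np.array([1.5, -2.0, 0.5])  # components v_1, v_2, v_3
basis = np.eye(3)               # rows are the basis vectors e_1, e_2, e_3

# v = v_j * e_j, summed over the index j:
reconstructed = sum(v[j] * basis[j] for j in range(3))
assert np.array_equal(reconstructed, v)
```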

Now that you know how a vector is represented in index notation, we can analogously write the scalar product \( \boldsymbol{a} ~\cdot~ \boldsymbol{b} \) of two vectors \( \boldsymbol{a} = (a_1, a_2, a_3) \) and \( \boldsymbol{b} = (b_1, b_2, b_3)\) in index notation. For this we use the index representation of a vector:

Scalar product of two vectors in index notation
\boldsymbol{a} ~\cdot~ \boldsymbol{b} ~=~ a_{\class{blue}{i}} \, \boldsymbol{\hat{e}}_{\class{blue}{i}} ~\cdot~ b_{\class{red}{j}} \, \boldsymbol{\hat{e}}_{\class{red}{j}}

In index notation, you may sort the factors as you like; this is the advantage of index notation, where the commutative law applies. Let's take advantage of that and put parentheses around the scalar product of the basis vectors, because it is precisely this product that introduces the Kronecker delta:

Scalar product of two vectors in index notation with factored out basis vectors
\boldsymbol{a} ~\cdot~ \boldsymbol{b} ~=~ a_{\class{blue}{i}} \, b_{\class{red}{j}} \, \left( \boldsymbol{\hat{e}}_{\class{blue}{i}} ~\cdot~ \boldsymbol{\hat{e}}_{\class{red}{j}} \right)

The basis vectors \(\boldsymbol{\hat{e}}_i\) and \(\boldsymbol{\hat{e}}_j\) are orthonormal (i.e. orthogonal and normalized). Recall what the property of being orthonormal means for two vectors. Their scalar product \( \boldsymbol{\hat{e}}_{i} ~\cdot~ \boldsymbol{\hat{e}}_{j} \) yields:

Scalar product of two basis vectors in index notation
\boldsymbol{\hat{e}}_{\class{blue}{i}} ~\cdot~ \boldsymbol{\hat{e}}_{\class{red}{j}} ~=~ \begin{cases} 1, &\mbox{} \class{blue}{i}=\class{red}{j} \\
0, &\mbox{} \class{blue}{i} \neq \class{red}{j} \end{cases}

Doesn't this property look familiar to you? The scalar product of two orthonormal vectors behaves exactly like Kronecker delta!

Definition of Kronecker delta is like scalar product of two basis vectors in index notation
\delta_{\class{blue}{i}\class{red}{j}} ~=~ \begin{cases} 1, &\mbox{} \class{blue}{i}=\class{red}{j} \\
0, &\mbox{} \class{blue}{i} \neq \class{red}{j} \end{cases}

Therefore replace the scalar product of two basis vectors with a Kronecker delta:

Scalar product of two orthonormal vectors expressed with Kronecker delta
\boldsymbol{\hat{e}}_{\class{blue}{i}} ~\cdot~ \boldsymbol{\hat{e}}_{\class{red}{j}} ~=~ \delta_{\class{blue}{i}\class{red}{j}}

Thus we can write the scalar product using the Kronecker delta:

Scalar product with Kronecker delta
\boldsymbol{a} ~\cdot~ \boldsymbol{b} ~=~ a_{\class{blue}{i}} \, b_{\class{red}{j}} \, \delta_{\class{blue}{i}\class{red}{j}}

If you remember rule #3, you can contract the index \(j\) if you want:

Scalar product with eliminated Kronecker delta
\boldsymbol{a} ~\cdot~ \boldsymbol{b} ~=~ a_{\class{blue}{i}} \, b_{\class{blue}{i}}

And you get exactly the definition of the scalar product, where the vector components are summed component-wise:

Scalar product of two vectors written out in index notation
\boldsymbol{a} ~\cdot~ \boldsymbol{b} ~=~ a_{\class{blue}{1}} \, b_{\class{blue}{1}} ~+~ a_{\class{blue}{2}} \, b_{\class{blue}{2}} ~+~ a_{\class{blue}{3}} \, b_{\class{blue}{3}}
Check: Write out the sum

We can write out the double summation over \(i\) and \(j\) for practice. In other words, we go through all possible combinations of the indices \(i\) and \(j\):

Sum of two vectors with Kronecker delta written out in index notation
a_{\class{blue}{i}} \, b_{\class{red}{j}} \, \delta_{\class{blue}{i}\class{red}{j}} &~=~ a_1 \, b_1 \, \delta_{11} ~+~ a_1 \, b_2 \, \delta_{12} ~+~ a_1 \, b_3 \, \delta_{13} \\\\
&~+~ a_2 \, b_1 \, \delta_{21} ~+~ a_2 \, b_2 \, \delta_{22} ~+~ a_2 \, b_3 \, \delta_{23} \\\\
& ~+~ a_3 \, b_1 \, \delta_{31} ~+~ a_3 \, b_2 \, \delta_{32} ~+~ a_3 \, b_3 \, \delta_{33}

As you can see, because of the definition of the Kronecker delta, only 3 of the 9 summands are nonzero, namely those with \( i = j \). So you may omit all summands with unequal indices:

Kronecker delta with different indices yields 0 in the sum of two vectors in index notation
a_{\class{blue}{i}} \, b_{\class{blue}{i}} ~=~ a_1 \, b_1 \, \delta_{11} ~+~ a_2 \, b_2 \, \delta_{22} ~+~ a_3 \, b_3 \, \delta_{33}

Using the definition of Kronecker delta, \( \delta_{11} ~=~ \delta_{22} ~=~ \delta_{33} ~=~ 1 \), you get the scalar product you are familiar with:

Kronecker delta with equal indices yields 1 in the sum of two vectors in index notation
a_{\class{blue}{i}} \, b_{\class{blue}{i}} ~=~ a_1 \, b_1 ~+~ a_2 \, b_2 ~+~ a_3 \, b_3
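The whole derivation can be checked numerically: inserting the Kronecker delta into the double sum \( a_i \, b_j \, \delta_{ij} \) must reproduce the ordinary scalar product. A sketch with arbitrary vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
delta = np.eye(3)

# The double sum a_i b_j delta_ij over both i and j ...
with_delta = np.einsum('i,j,ij->', a, b, delta)

# ... collapses to the ordinary scalar product a_i b_i:
assert with_delta == np.dot(a, b) == 32.0
```

Of the 9 terms in the double sum, the delta keeps only the 3 diagonal ones, exactly as in the written-out check above.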
Example: Kronecker delta in quantum mechanics

Let's take an example from quantum mechanics, where you will also encounter the Kronecker delta. The spin states \(|1\rangle\) (spin up) and \(|2\rangle\) (spin down) are orthonormal to each other, which means they satisfy the following conditions:

Scalar product of two different spin states yields 0
\langle 1 | 2\rangle ~=~ 0, ~~~ \langle 2 | 1 \rangle ~=~ 0
Scalar product of two equal spin states yields 1
\langle 1 | 1 \rangle ~=~ 1, ~~~ \langle 2 | 2 \rangle ~=~ 1

These four equations can be combined into a single equation using the Kronecker delta \(\delta_{ij}\):

Scalar product of two equal spin states using Kronecker delta
\langle \class{blue}{i} | \class{red}{j} \rangle ~=~ \delta_{\class{blue}{i}\class{red}{j}}
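As a sketch, we can represent the two spin states by any orthonormal pair of vectors (here the standard basis of a two-dimensional state space, a common concrete choice) and verify all four conditions at once:

```python
import numpy as np

# |1> and |2> as an orthonormal pair of vectors.
states = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}

# <i|j> = delta_ij for all index combinations:
for i in (1, 2):
    for j in (1, 2):
        inner = np.vdot(states[i], states[j])
        assert inner == (1.0 if i == j else 0.0)
```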

Exercises with Solutions


Exercise: Simplify Terms Using Kronecker Delta

Simplify the following expressions using the rules for calculating with Kronecker delta:

  1. \(\delta_{31}\,\delta_{33}\)
  2. \(\delta_{ji}\,T_{ink}\)
  3. \(\delta_{j1}\,\delta_{ji}\,\delta_{2i}\)
  4. \(\delta_{ik}\,\delta_{i3}\,\delta_{3k}\)
  5. \(\delta_{jj}\) with \(j ~\in~ \{ 1,2,3,4 \} \)
  6. \(\delta_{k\mu} \, \varepsilon_{kmn} \, \delta_{ss} \) with \(s ~\in~ \{ 1,2 \} \)

Solution to the exercise #1

Here we simplify the following expression: \begin{align} \delta_{31}\,\delta_{33} ~&=~ 0 \cdot 1 ~=~ 0 \end{align}

We exploited that \( \delta_{31} = 0 \) because its two indices are different, and \( \delta_{33} = 1 \) because its two indices are equal.

Solution to the exercise #2

We want to simplify the following expression: \begin{align} \delta_{ji} \, T_{ink} ~&=~ T_{jnk} \end{align}

We exploited rule #3: the Kronecker delta \( \delta_{ji} \) sets \( i = j \) in \( T_{ink} \), turning it into \( T_{jnk} \).

Solution to the exercise #3

Here we simplify the following expression: \begin{align} \delta_{j1} \, \delta_{ji} \, \delta_{2i} \end{align}

For example, first combine the index \(j\) in \(\delta_{j1} \, \delta_{ji}\) and then combine the index \(i\): \begin{align} \delta_{j1} \, \delta_{ji} \, \delta_{2i} &~=~ \delta_{i1} \, \delta_{2i} \\\\ &~=~ \delta_{12} \\\\ &~=~ 0 \end{align}

Of course, you could just as well first combine the index \(i\) in \(\delta_{ji} \, \delta_{2i}\): \begin{align} \delta_{j1} \, \delta_{ji} \, \delta_{2i} &~=~ \delta_{j1} \, \delta_{2j} \\\\ &~=~ \delta_{21} \\\\ &~=~ 0 \end{align}

You get the same result.

Solution to the exercise #4

Proceed analogously to exercise #3, except that at the end you get not 0 but 1, according to the definition of the Kronecker delta: \begin{align} \delta_{ik} \, \delta_{i3} \, \delta_{3k} &~=~ \delta_{k3} \, \delta_{3k} \\\\ &~=~ \delta_{33} \\\\ &~=~ 1 \end{align}

Again, the order of simplification does not matter.

Solution to the exercise #5

Here we simplify the following expression, summing over the index \( j \) up to 4: \begin{align} \delta_{jj} \end{align} We get a sum with four summands: \begin{align} \delta_{jj} &~=~ \delta_{11} ~+~ \delta_{22} ~+~ \delta_{33} ~+~ \delta_{44} \\\\ &~=~ 1~+~1~+~1~+~1 \\\\ &~=~ 4 \end{align}

Solution to the exercise #6

Here we simplify the following expression, summing over the index \( s \) up to 2: \begin{align} \delta_{k\mu} \, \varepsilon_{kmn} \, \delta_{ss} \end{align}

Let us first combine \( \delta_{k\mu} \, \varepsilon_{kmn} \) and then write out the sum \(\delta_{ss}\): \begin{align} \delta_{k\mu} \, \varepsilon_{kmn} \, \delta_{ss} &~=~ \varepsilon_{\mu mn} \, \delta_{ss} \\\\ &~=~ \varepsilon_{\mu mn} \, \left( \delta_{11} ~+~ \delta_{22} \right) \\\\ &~=~ \varepsilon_{\mu mn} \, \left( 1~+~ 1 \right) \\\\ &~=~ 2\, \varepsilon_{\mu mn} \end{align}