
Bra-Ket Notation: All You Need to Know

Table of contents
  1. Wave function as a vector in Hilbert space Here you will learn how the wave function from quantum mechanics can be understood as an infinite-dimensional vector living in Hilbert space.
  2. Bra and Ket vectors Here you will learn how bra and ket vectors are defined and how they are related to each other.
  3. Scalar Product and Inner Product in Bra-Ket notation Here you will learn how to use bra-ket notation to write up the overlap integral as the scalar product of bra-ket vectors.
  4. Tensor product in Bra-Ket notation Here you will learn how the tensor product of a ket and a bra vector yields a matrix, for example a projection matrix.
  5. Basis change with the projection matrices Here you will learn how to use projection matrices to represent a quantum state in a different basis.

Wave function as a vector in Hilbert space

Consider any one-dimensional wave function \( \mathit{\Psi}(x)\) describing a quantum mechanical particle. The value of the wave function at point \( \class{red}{x_1} \) is \( \mathit{\Psi}(\class{red}{x_1}) \), at point \(\class{green}{x_2}\) it is \( \mathit{\Psi}(\class{green}{x_2}) \), at point \(\class{blue}{x_3}\) it is \( \mathit{\Psi}(\class{blue}{x_3}) \), and so on. In this way you can assign a function value to each \(x\)-value. We can then collect all function values in a list and take this list of values to be a column vector \( \mathit{\Psi}\) that lives in an abstract space. The vector then has the components:
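\[ \mathit{\Psi} ~=~ \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \mathit{\Psi}(\class{blue}{x_3}) \\ \vdots \end{bmatrix} \]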

We can visualize this vector as in linear algebra (see Illustration 1, right). The first component \( \mathit{\Psi}(\class{red}{x_1}) \) is plotted along the first coordinate axis, the second component \( \mathit{\Psi}(\class{green}{x_2}) \) along the second axis and the third component \( \mathit{\Psi}(\class{blue}{x_3}) \) along the third axis. We'll stick with just three components because I can't draw a four-dimensional coordinate system. Each component is assigned a coordinate axis. In this way the three components span a three-dimensional space.

Illustration 1: Left: real wave function \(\Psi(x)\) and three of its example function values. Right: the three function values span an approximate coordinate system in which the wave function \(\Psi\) is interpreted as a vector.

As soon as we consider an additional function value \( \mathit{\Psi}(\class{brown}{x_4}) \), the space becomes four-dimensional and so on. We call the vector \( \mathit{\Psi} \) representing a wave function \( \mathit{\Psi}(x) \) a state vector.

Theoretically, of course, there are infinitely many \(x\)-values. Therefore there are also infinitely many associated function values \( \mathit{\Psi}(x) \). If there are infinitely many function values, then the space in which the state vector \( \mathit{\Psi}\) lives is infinite-dimensional. Keep in mind that this space does not have to be an infinite-dimensional position space, but can be any abstract space.

This abstract space in which quantum mechanical state vectors live is called Hilbert space. In general, it is an infinite-dimensional vector space. The spin states \( \mathit{\Psi}_{ \uparrow} \) (spin-up) and \( \mathit{\Psi}_{ \downarrow} \) (spin-down) of a single particle, for example, live in a two-dimensional Hilbert space. That means state vectors like the spin-up state \( \mathit{\Psi}_{ \uparrow} \) have only two components.

If you approximate a wave function by finitely many function values, for example for numerical calculations on a computer (there is no other way), the state vector \( \mathit{\Psi}\) has finitely many components. The more components \(n\) you take, the more accurate the state vector becomes:
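\[ \mathit{\Psi} ~=~ \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \vdots \\ \mathit{\Psi}(x_n) \end{bmatrix} \]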

A \( \mathit{\Psi}\) vector with \(n\) components, such as the one above, lives in an \(n\)-dimensional, that is finite-dimensional, Hilbert space.

Bra and Ket vectors

So, as you learned, we can represent a quantum mechanical particle in two ways:

  • as a wave function

  • as a state vector

In order to better distinguish the description of the particle as a state vector from the description as a wave function, we write the state vector \( \mathit{\Psi} \) inside an arrow-like bracket:
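\[ |\mathit{\Psi}\rangle ~=~ \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \mathit{\Psi}(\class{blue}{x_3}) \\ \vdots \end{bmatrix} \]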

The wave function \(\mathit{\Psi}(x)\) represented as a column vector is called a ket vector \(|\mathit{\Psi}\rangle\); its arrow-like bracket points to the right. It doesn't matter what you write inside the bracket. For example, you could also write \(|\mathit{\Psi}(x)\rangle\). The only thing you have to keep in mind is that the notation inside the bracket makes clear to other readers which quantum mechanical state this ket vector represents.

  • So when you see the ket notation \(|\mathit{\Psi}\rangle\), then you know that it means the representation of the particle state as a state vector.

  • On the other hand, if you see \(\mathit{\Psi}(x)\), then you know that it means the representation of the particle state as a wave function.

The vector \(|\mathit{\Psi}\rangle^\dagger\) adjoint to the ket vector is called the bra vector. The symbol \(\dagger\) is pronounced 'dagger', and 'adjoint' is sometimes also called 'Hermitian adjoint'. (Check out the lesson about Hermitian operators to learn more about the adjoint operation.) For a clever, compact notation, we write the bra vector with an inverted arrow: \(|\mathit{\Psi}\rangle^\dagger ~:=~ \langle\mathit{\Psi}|\).

To get the bra vector \( \langle\mathit{\Psi}| \) adjoint to the ket vector \( |\mathit{\Psi}\rangle \), you need to perform two operations (written out below this list):

  1. Transpose the ket vector. This turns the column vector into a row vector.

  2. Complex-conjugate the transposed ket vector. This operation 'adds asterisks' to the components.
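For a ket vector with three components, with the short index notation \( \mathit{\Psi}_{\class{red}{1}} = \mathit{\Psi}(\class{red}{x_1}) \) for the function values, the two steps look like this:

\[ |\mathit{\Psi}\rangle^{T} ~=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} & \mathit{\Psi}_{\class{green}{2}} & \mathit{\Psi}_{\class{blue}{3}} \end{bmatrix} \]

\[ \langle\mathit{\Psi}| ~=~ \left(|\mathit{\Psi}\rangle^{T}\right)^{*} ~=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Psi}_{\class{blue}{3}}^{~*} \end{bmatrix} \]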

What are Bra-Ket vectors?

The wave function \(\mathit{\Psi}\) in the vector representation corresponds to the ket vector \(|\mathit{\Psi}\rangle\) and the row vector \(\langle\mathit{\Psi}|\) adjoint to the ket vector is the bra vector.

Since we have interpreted the wave function \(\mathit{\Psi}\) as a ket vector \(| \mathit{\Psi} \rangle\), we can work with it in practically the same way as with the usual vectors you know from mathematics. For example, we can form a scalar product or a tensor product between bra and ket vectors. What is probably new to you is that the components of the vector can be complex and the number of components can be infinite.
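Because bra-ket vectors behave like ordinary (complex) vectors, you can also experiment with them numerically. Here is a minimal sketch using NumPy; the three complex components are made up purely for illustration:

```python
import numpy as np

# A ket vector: a column of (possibly complex) components,
# e.g. three sampled values of a wave function.
ket_psi = np.array([1 + 2j, 0.5j, 3.0])

# The corresponding bra vector: complex-conjugate and transpose
# (for a 1-D NumPy array the transpose is implicit).
bra_psi = ket_psi.conj()

# Scalar product <Psi|Psi>: the sum of |component|^2, a real number.
norm_squared = bra_psi @ ket_psi
print(norm_squared.real)  # (1^2 + 2^2) + 0.5^2 + 3^2 = 14.25
```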

Scalar Product and Inner Product in Bra-Ket notation

You can form the scalar product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) between a bra vector \(\langle\mathit{\Phi} | \) and a ket vector \( | \mathit{\Psi} \rangle \). Here we do not need to write out the scalar product dot, and the two adjacent vertical lines are merged into one: we write \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) instead of \( \langle\mathit{\Phi} | ~\cdot~ | \mathit{\Psi} \rangle \).

If the state vectors between which you form the scalar product live in an infinite-dimensional Hilbert space, then we do not call this operation a scalar product but an inner product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\). However, the bra-ket notation of the inner product remains the same as in the case of the scalar product.

In a finite, \(n\)-dimensional Hilbert space, the scalar product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) between an arbitrary bra vector \(\langle\mathit{\Phi} | \) and a ket vector \( | \mathit{\Psi} \rangle \) looks like this:
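\[ \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}}^{~*} & \mathit{\Phi}_{\class{green}{2}}^{~*} & \cdots & \mathit{\Phi}_{n}^{~*} \end{bmatrix} \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} \\ \mathit{\Psi}_{\class{green}{2}} \\ \vdots \\ \mathit{\Psi}_{n} \end{bmatrix} \]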

The indices \( \class{red}{1} \), \( \class{green}{2} \), \( \class{blue}{3} \) up to \( n \) on the components are just a short notation for the function values. For example, the component \( \mathit{\Psi}_{\class{red}{1}} \) stands for the function value \( \mathit{\Psi}(\class{red}{x_1}) \). You can multiply out the vectors above just as you do in usual matrix multiplication:
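\[ \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \mathit{\Phi}_{\class{red}{1}}^{~*}\,\mathit{\Psi}_{\class{red}{1}} ~+~ \mathit{\Phi}_{\class{green}{2}}^{~*}\,\mathit{\Psi}_{\class{green}{2}} ~+~ \dots ~+~ \mathit{\Phi}_{n}^{~*}\,\mathit{\Psi}_{n} \]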

You can write this sum of products more compactly with a sum sign:
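\[ \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \sum_{i\,=\,1}^{n} \mathit{\Phi}_{i}^{~*}\,\mathit{\Psi}_{i} \]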

Here \(n\) is the dimension of the Hilbert space, that is the number of components of a state vector living in this Hilbert space. If the dimension \(n\) of the Hilbert space is infinite, then the sum is only an approximation.

If we take two normalized and orthogonal states \( \mathit{\Psi}_{\class{red}{i}} \) and \( \mathit{\Psi}_{\class{blue}{j}} \) and give them variable indices instead of fixed values, then their scalar product \( \langle \mathit{\Psi}_{\class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle \) gives either 0 or 1. You know this property from linear algebra when you form the scalar product of two basis vectors:

  • The scalar product of two different orthonormal states, \( \class{red}{i} \neq \class{blue}{j} \), yields: \( \langle\mathit{\Psi}_{ \class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle = 0 \).

  • The scalar product of two equal orthonormal states, \( \class{red}{i} = \class{blue}{j} \), yields: \( \langle\mathit{\Psi}_{ \class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle = 1 \).

These two cases can be combined in a single equation using the Kronecker delta \( \delta_{\class{red}{i}\class{blue}{j}} \):
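\[ \langle\mathit{\Psi}_{\class{red}{i}}|\mathit{\Psi}_{\class{blue}{j}}\rangle ~=~ \delta_{\class{red}{i}\class{blue}{j}} \]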

The scalar product with the sum sign is not exact for states from the infinite-dimensional Hilbert space, because we would simply omit many function values, such as all those between \( \class{red}{x_1} \) and \( \class{green}{x_2} \). With infinite-dimensional states we must switch to an integral. Therefore we replace the sum sign by an integral sign. Of course, we now consider the function values of \( \Phi \) and \( \Psi \) not at discrete points \( x_{ \class{red}{i}} \), but at all points \(x\):
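\[ \langle\mathit{\Phi}|\mathit{\Psi}\rangle ~=~ \int_{-\infty}^{\infty} \mathit{\Phi}^{*}(x)\,\mathit{\Psi}(x)\,\text{d}x \]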

To calculate the inner product of two states \( \langle\mathit{\Phi} | \) and \( | \mathit{\Psi} \rangle \), we thus need to calculate this overlap integral.

What does this inner product (or a scalar product) actually mean? The inner product, like a scalar product, is a number that measures how much two states overlap:

  • If the inner product of two normalized states is \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle = 1 \), then the corresponding wave functions \( \class{blue}{\mathit{\Phi}} \) and \( \class{red}{\mathit{\Psi}} \) lie exactly on top of each other. They are equal.

  • If the inner product of two normalized states is \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle = 0 \), then the wave functions \( \class{blue}{\mathit{\Phi}} \) and \( \class{red}{\mathit{\Psi}} \) do not overlap at all.

  • All values of the inner product \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle \) between 0 and 1 indicate a partial overlap of the two states.

Overlap of two one-dimensional real, normalized wave functions to illustrate the inner product.
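Here is a minimal numerical sketch of such an overlap, assuming two real, normalized Gaussian wave packets (all values are chosen purely for illustration):

```python
import numpy as np

# Discretize the x-axis to approximate the overlap integral.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def normalized_gaussian(x, center):
    """A real wave function, normalized so that <Psi|Psi> = 1."""
    psi = np.exp(-(x - center) ** 2)
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

phi = normalized_gaussian(x, center=0.0)  # wave function Phi
psi = normalized_gaussian(x, center=1.0)  # wave function Psi

# <Phi|Psi> as a discretized integral of Phi*(x) Psi(x).
overlap = np.sum(np.conj(phi) * psi) * dx
print(overlap)  # ~0.61: partial overlap; it would be 1.0 if both centers coincided
```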

Tensor product in Bra-Ket notation

Another important operation between a bra and a ket vector is the tensor product (more precisely: the outer product): \(|\mathit{\Phi} \rangle ~\otimes~ \langle\mathit{\Psi} |\). We can omit the tensor symbol \(\otimes\) because it is immediately clear from the bra-ket notation that this is not a scalar or inner product: \(|\mathit{\Phi} \rangle \langle\mathit{\Psi} |\). Note that, compared to the inner product, the bra and ket vectors are swapped here: the ket stands on the left and the bra on the right.

The result of the tensor product is a matrix. If the states \(|\mathit{\Phi} \rangle \) and \( |\mathit{\Psi} \rangle \) each have only three components, then we get a 3x3 matrix:
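\[ |\mathit{\Phi}\rangle\langle\mathit{\Psi}| ~=~ \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}} \\ \mathit{\Phi}_{\class{green}{2}} \\ \mathit{\Phi}_{\class{blue}{3}} \end{bmatrix} \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Psi}_{\class{blue}{3}}^{~*} \end{bmatrix} ~=~ \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Phi}_{\class{red}{1}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Phi}_{\class{red}{1}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \\ \mathit{\Phi}_{\class{green}{2}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Phi}_{\class{green}{2}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Phi}_{\class{green}{2}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \\ \mathit{\Phi}_{\class{blue}{3}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Phi}_{\class{blue}{3}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Phi}_{\class{blue}{3}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \end{bmatrix} \]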

You will encounter such matrices (in the form of density matrices) very often in quantum mechanics, for example when learning about quantum entanglement.

If we take a normalized state \( |\mathit{\Psi} \rangle \), that is, the magnitude of this vector is 1, and form a tensor product of this state with itself, we get a projection matrix \(|\mathit{\Psi} \rangle \langle\mathit{\Psi} | \) (or projection operator if no concrete components are considered):
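\[ |\mathit{\Psi}\rangle\langle\mathit{\Psi}| ~=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Psi}_{\class{red}{1}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Psi}_{\class{red}{1}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \\ \mathit{\Psi}_{\class{green}{2}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Psi}_{\class{green}{2}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Psi}_{\class{green}{2}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \\ \mathit{\Psi}_{\class{blue}{3}}\mathit{\Psi}_{\class{red}{1}}^{~*} & \mathit{\Psi}_{\class{blue}{3}}\mathit{\Psi}_{\class{green}{2}}^{~*} & \mathit{\Psi}_{\class{blue}{3}}\mathit{\Psi}_{\class{blue}{3}}^{~*} \end{bmatrix} \]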

When we apply it to any ket vector, we multiply a matrix \( |\mathit{\Psi} \rangle \langle\mathit{\Psi} | \) by a column vector \( |\mathit{\Phi} \rangle \):
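\[ \big(|\mathit{\Psi}\rangle\langle\mathit{\Psi}|\big)\,|\mathit{\Phi}\rangle ~=~ |\mathit{\Psi}\rangle\,\langle\mathit{\Psi}|\mathit{\Phi}\rangle \]

Here the inner product \( \langle\mathit{\Psi}|\mathit{\Phi}\rangle \) is just a number that scales the ket vector \( |\mathit{\Psi}\rangle \).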

The special feature of a projection matrix is: it projects the state \( | \mathit{\Phi}\rangle \) onto the state \( | \mathit{\Psi}\rangle \). In other words, it gives the part of the wave function \( \mathit{\Phi} \) that overlaps with the wave function \( \mathit{\Psi} \). The result of the projection is thus a ket vector \( | \mathit{\Psi} \rangle \langle \mathit{\Psi} | \, \mathit{\Phi}\rangle \) describing the overlap of the wave functions \( \mathit{\Phi} \) and \( \mathit{\Psi} \). Projection matrices are thus an important tool in theoretical physics to study the overlap of quantum states.
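Here is a minimal numerical sketch of this projection behavior, with made-up three-component example states:

```python
import numpy as np

# A normalized example state |Psi> and an arbitrary state |Phi>.
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
phi = np.array([1.0, 0.0, 0.0])

# Projection matrix |Psi><Psi| as an outer product.
P = np.outer(psi, psi.conj())

# Applying P twice is the same as applying it once (P @ P == P).
print(np.allclose(P @ P, P))  # True

# Project |Phi> onto |Psi>: the result is the ket |Psi><Psi|Phi>.
print(P @ phi)  # [0.5, 0.5, 0.0], i.e. <Psi|Phi> = 1/sqrt(2) times |Psi>
```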

Basis change with the projection matrices

Probably the most important use of projection matrices is the very simple change of basis. If we have some quantum state \( |\mathit{\Phi}\rangle \) and we want to look at it from a different perspective, or mathematically speaking, represent it in a different basis, then of course the first thing we do is choose the desired basis: \( \{ |\mathit{\Psi}_{\class{red}{i}}\rangle \} \). This is, as you hopefully know from linear algebra, a set of orthonormal vectors \( |\mathit{\Psi}_{\class{red}{1}}\rangle \), \( |\mathit{\Psi}_{\class{red}{2}}\rangle \), \( |\mathit{\Psi}_{\class{red}{3}}\rangle \) and so on. Their number is equal to the dimension of the Hilbert space in which these vectors live.

For the sake of demonstration, let us assume that our desired basis consists of only three basis vectors: \( \{ |\mathit{\Psi}_{\class{red}{1}}\rangle, |\mathit{\Psi}_{\class{red}{2}}\rangle, |\mathit{\Psi}_{\class{red}{3}}\rangle \} \). With each of these basis vectors, we can construct projection matrices: \( |\mathit{\Psi}_{\class{red}{1}}\rangle\langle \mathit{\Psi}_{\class{red}{1}}| \), \( |\mathit{\Psi}_{\class{red}{2}}\rangle\langle \mathit{\Psi}_{\class{red}{2}}| \) and \( |\mathit{\Psi}_{\class{red}{3}}\rangle\langle \mathit{\Psi}_{\class{red}{3}}| \). To represent the quantum state \( |\mathit{\Phi}\rangle \) in this basis, we next form the sum of the projection matrices:
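\[ |\mathit{\Psi}_{\class{red}{1}}\rangle\langle\mathit{\Psi}_{\class{red}{1}}| ~+~ |\mathit{\Psi}_{\class{red}{2}}\rangle\langle\mathit{\Psi}_{\class{red}{2}}| ~+~ |\mathit{\Psi}_{\class{red}{3}}\rangle\langle\mathit{\Psi}_{\class{red}{3}}| ~=~ \sum_{\class{red}{i}\,=\,1}^{3} |\mathit{\Psi}_{\class{red}{i}}\rangle\langle\mathit{\Psi}_{\class{red}{i}}| \]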

As we know from mathematics, the sum of the projection matrices of states forming a basis is a unit matrix \( I \). The fact that the sum results in a unit matrix is very important when changing the basis, because we do not want to change the quantum state \( |\mathit{\Phi}\rangle \) itself. A unit matrix multiplied by a column vector \( |\mathit{\Phi}\rangle \) does not change this vector:
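\[ I \, |\mathit{\Phi}\rangle ~=~ |\mathit{\Phi}\rangle \]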

Now we substitute the sum of the basis projection matrices for the unit matrix in this equation:
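\[ |\mathit{\Phi}\rangle ~=~ \sum_{\class{red}{i}\,=\,1}^{3} |\mathit{\Psi}_{\class{red}{i}}\rangle\langle\mathit{\Psi}_{\class{red}{i}}|\mathit{\Phi}\rangle \]

Each coefficient \( \langle\mathit{\Psi}_{\class{red}{i}}|\mathit{\Phi}\rangle \) is a number: the component of \( |\mathit{\Phi}\rangle \) along the basis vector \( |\mathit{\Psi}_{\class{red}{i}}\rangle \).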

The resulting state \( |\mathit{\Phi}\rangle \), while notated identically to the state before the basis change, is now represented in the basis \( \{ |\mathit{\Psi}_{\class{red}{1}}\rangle, |\mathit{\Psi}_{\class{red}{2}}\rangle, |\mathit{\Psi}_{\class{red}{3}}\rangle \} \). If we want to emphasize the new basis, we can also give it an index \( \Psi \): \( |\mathit{\Phi}\rangle_{\Psi} \). I hope you now understand how useful the concept of projection matrices is!

In general, we can write the basis change into a basis with \(n\) basis vectors \( \{ |\mathit{\Psi}_{\class{red}{i}}\rangle \} \) by simply replacing the number 3 at the summation sign with \(n\):
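\[ |\mathit{\Phi}\rangle ~=~ \sum_{\class{red}{i}\,=\,1}^{n} |\mathit{\Psi}_{\class{red}{i}}\rangle\langle\mathit{\Psi}_{\class{red}{i}}|\mathit{\Phi}\rangle \]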

The basis change with a finite number of basis vectors is of course exact only for states \( |\mathit{\Phi}\rangle \) and \( |\mathit{\Psi}_{\class{red}{i}}\rangle \) living in finite-dimensional Hilbert spaces. For states with infinitely many components, \(n\) is infinite and the state \( |\mathit{\Phi}\rangle_{\Psi} \) is only an approximation in the new basis. The approximation becomes more accurate the larger we choose \(n\). But how does the basis change work exactly for states with infinitely many components? With an integral! For this, we replace the discrete summation with a sum sign by a continuous summation with an integral:
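\[ |\mathit{\Phi}\rangle ~=~ \int |\mathit{\Psi}_x\rangle\langle\mathit{\Psi}_x|\mathit{\Phi}\rangle \,\text{d}x \]

Here the basis kets are labeled by the continuous index \(x\) instead of the discrete index \( \class{red}{i} \).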

Now you should have a solid basic knowledge of bra-ket notation:

  • What bra and ket vectors are.

  • How to use it to form the scalar product and the inner product.

  • How to construct projection matrices with it.

  • How to perform a basis change with projection matrices in bra-ket notation.

In the next lesson, you'll learn about the operators used in quantum mechanics, namely the Hermitian operators - in Bra-Ket notation, of course.