Bra-Ket Notation: All You Need to Know
Table of contents
 Wave function as a vector in Hilbert space Here you will learn how the wave function from quantum mechanics can be understood as an infinite-dimensional vector living in Hilbert space.
 Bra and Ket vectors Here you will learn how bra and ket vectors are defined and how they are related to each other.
 Scalar Product and Inner Product in Bra-Ket notation Here you will learn how to use bra-ket notation to write the overlap integral as the scalar product of bra and ket vectors.
 Tensor product in Bra-Ket notation
 Basis change with the projection matrices
Wave function as a vector in Hilbert space
Consider any one-dimensional wave function \( \mathit{\Psi}(x)\) describing a quantum mechanical particle. The value of the wave function at point \( \class{red}{x_1} \) is \( \mathit{\Psi}(\class{red}{x_1}) \), at point \(\class{green}{x_2}\) it is \( \mathit{\Psi}(\class{green}{x_2}) \), at point \(\class{blue}{x_3}\) it is \( \mathit{\Psi}(\class{blue}{x_3}) \), and so on. In this way you can assign a function value to each \(x\) value. We can then collect all function values in a list and take this list to be a column vector \( \mathit{\Psi}\) living in an abstract space. The vector then has the components: $$ \begin{align} \begin{bmatrix} \mathit{\Psi}(\class{red}{x_1}) \\ \mathit{\Psi}(\class{green}{x_2}) \\ \mathit{\Psi}(\class{blue}{x_3}) \\ \vdots \end{bmatrix} \end{align} $$
We can visualize this vector as in linear algebra (see Illustration 1, right). The first component \( \mathit{\Psi}(\class{red}{x_1}) \) forms the first coordinate axis, the second component \( \mathit{\Psi}(\class{green}{x_2}) \) the second axis and the third component \( \mathit{\Psi}(\class{blue}{x_3}) \) the third axis. We'll stick with just three components because a four-dimensional coordinate system cannot be drawn. Each component is assigned a coordinate axis. In this way the three components span a three-dimensional space.
As soon as we consider an additional function value \( \mathit{\Psi}(\class{brown}{x_4}) \), the space becomes four-dimensional and so on. We call the vector \( \mathit{\Psi} \) representing a wave function \( \mathit{\Psi}(x) \) a state vector.
Theoretically, of course, there are infinitely many \(x\) values. Therefore there are also infinitely many associated function values \( \mathit{\Psi}(x) \). If there are infinitely many function values, then the space in which the state vector \( \mathit{\Psi}\) lives is infinite-dimensional. Keep in mind that this space does not have to be an infinite-dimensional position space, but can be any abstract space.
This abstract space in which quantum mechanical state vectors live is called Hilbert space. In general, this is an infinite-dimensional vector space. The spin states \( \mathit{\Psi}_{ \uparrow} \) (spin-up) and \( \mathit{\Psi}_{ \downarrow} \) (spin-down) describing a single particle, for example, live in a two-dimensional Hilbert space. That means: state vectors like the spin-up state \( \mathit{\Psi}_{ \uparrow} \) have only two components.
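As a small numerical sketch (using the conventional \( S_z \) basis representation, which is an illustrative assumption not fixed by the text above), the two spin states can be written as two-component vectors:

```python
import numpy as np

# Spin-up and spin-down as two-component state vectors
# (conventional S_z basis representation; an illustrative choice)
spin_up = np.array([1.0, 0.0])
spin_down = np.array([0.0, 1.0])

# Both are normalized ...
print(np.vdot(spin_up, spin_up))    # → 1.0
# ... and mutually orthogonal
print(np.vdot(spin_up, spin_down))  # → 0.0
```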
If you approximate an infinite-dimensional wave function, for example for numerical calculations on the computer, by finitely many function values (there is no other way), the state vector \( \mathit{\Psi}\) will have finitely many components. The more components \(n\) you take, the more accurate the state vector becomes:
If the Hilbert space is finite-dimensional, then a \( \mathit{\Psi}\) vector with \(n\) components, such as in Eq. 2, lives in an \(n\)-dimensional Hilbert space.
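A short numerical sketch of this idea (the Gaussian wave function and the sampling interval are illustrative choices): sampling a wave function at more and more points makes the discrete approximation of the norm integral \( \int |\mathit{\Psi}(x)|^2 \, \text{d}x \) more accurate.

```python
import numpy as np

def sampled_state(n, L=10.0):
    """Sample a normalized Gaussian wave function at n points on [-L/2, L/2]."""
    x = np.linspace(-L/2, L/2, n)
    dx = x[1] - x[0]
    psi = np.pi**-0.25 * np.exp(-x**2 / 2)  # Psi(x_1), ..., Psi(x_n)
    return psi, dx

# The finite state vector approximates the wave function better as n grows:
# the Riemann sum for the norm integral approaches the exact value 1.
for n in (10, 100, 1000):
    psi, dx = sampled_state(n)
    norm = np.sum(np.abs(psi)**2) * dx
    print(n, norm)
```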
Bra and Ket vectors
So, as you learned, we can represent a quantum mechanical particle in two ways:

as a wave function

as a state vector
In order to better distinguish the description of the particle as a state vector from the description as a wave function, we write the state vector \( \mathit{\Psi} \) inside an arrow-like bracket:
Wave function \(\mathit{\Psi}(x)\) represented as a column vector is called a ket vector \(|\mathit{\Psi}\rangle\); the arrow-like bracket points to the right. It does not matter what you write inside the bracket. For example, you could have written \(|\mathit{\Psi}(x)\rangle\). The only thing you have to keep in mind is that the notation inside the bracket makes clear to other readers which quantum mechanical system this ket vector in Eq. 3 represents.

So when you see the ket notation \(|\mathit{\Psi}\rangle\), then you know that it means the representation of the particle state as a state vector.

On the other hand, if you see \(\mathit{\Psi}(x)\), then you know that it means the representation of the particle state as a wave function.
The vector \(|\mathit{\Psi}\rangle^\dagger\) adjoint to the ket vector is called a bra vector. Check out the lesson about Hermitian operators to learn more about the adjoint operation. The symbol \(\dagger\) is pronounced 'dagger'. For a clever, compact notation, we write the bra vector with an inverted arrow: \(|\mathit{\Psi}\rangle^\dagger ~:=~ \langle\mathit{\Psi}|\). Note that 'adjoint' is sometimes also called 'Hermitian adjoint'.
To get the bra vector \( \langle\mathit{\Psi}| \) adjoint to the ket vector \( |\mathit{\Psi}\rangle \), you need to perform two operations:

Transpose the ket vector in Eq. 3. This makes it a row vector:$$ \begin{align} \left[ \mathit{\Psi}(\class{red}{x_1}),~ \mathit{\Psi}(\class{green}{x_2}),~ \mathit{\Psi}(\class{blue}{x_3}),~~... \right] \end{align} $$ 
Complex-conjugate the transposed ket vector. This operation 'adds asterisks' to the components.
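The two steps above can be sketched in NumPy (with illustrative component values); for a 1-D array the transpose does nothing, so complex conjugation is the operative step:

```python
import numpy as np

# A ket with complex components (illustrative values)
ket = np.array([1 + 2j, 3 - 1j, 0.5j])

# Bra vector: transpose, then complex-conjugate.
# For a 1-D NumPy array the transpose is a no-op, so .conj() suffices;
# for an explicit column vector you would write ket.conj().T.
bra = ket.conj()

# Each component has picked up an 'asterisk' (complex conjugate)
assert bra[0] == 1 - 2j
```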
Since we have interpreted the wave function \(\mathit{\Psi}\) as a ket vector \( |\mathit{\Psi} \rangle\), we can work with it practically in the same way as with the usual vectors you know from mathematics. For example, we can form a scalar product or a tensor product between bra and ket vectors. The thing that is probably new to you is that the components of the vector can be complex and the number of components can be infinite.
Scalar Product and Inner Product in Bra-Ket notation
You can form the scalar product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) between a bra vector \(\langle\mathit{\Phi}| \) and a ket vector \( |\mathit{\Psi} \rangle \). Here we do not need to write the scalar product dot, and the two adjacent vertical lines merge into one: we write \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) instead of \( \langle\mathit{\Phi}| ~\cdot~ |\mathit{\Psi} \rangle \).
If the state vectors between which you form the scalar product live in an infinite-dimensional Hilbert space, then we call this operation not a scalar product but an inner product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\). The bra-ket notation of the inner product, however, remains the same as in the case of the scalar product.
In a finite, \(n\)-dimensional Hilbert space, the scalar product \(\langle\mathit{\Phi} | \mathit{\Psi} \rangle\) between an arbitrary bra vector \(\langle\mathit{\Phi}| \) and a ket vector \( |\mathit{\Psi} \rangle \) looks like this:
The indices \( \class{red}{1} \), \( \class{green}{2} \), \( \class{blue}{3} \) up to \( n \) on the components are just a short notation for the function values. For example, the component \( \mathit{\Psi}_{\class{red}{1}} \) stands for the function value \( \mathit{\Psi}(\class{red}{x_1}) \). You can multiply out the vectors in Eq. 6 just as in ordinary matrix multiplication:
You can write Eq. 7 more compactly with a sum sign: $$ \begin{align} \langle\mathit{\Phi} | \mathit{\Psi} \rangle ~=~ \sum_{\class{red}{i} \,=\, 1}^{n} \mathit{\Phi}^*_{\class{red}{i}} \, \mathit{\Psi}_{\class{red}{i}} \end{align} $$
Here \(n\) is the dimension of the Hilbert space, that is, the number of components of a state vector living in this Hilbert space. If the dimension of the Hilbert space is infinite, then a finite sum like this is only an approximation.
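A quick check of this component sum in NumPy (illustrative vectors): `np.vdot` complex-conjugates its first argument, which matches the bra-ket convention \( \sum_i \mathit{\Phi}^*_i \, \mathit{\Psi}_i \).

```python
import numpy as np

phi = np.array([1 + 1j, 2 - 1j, 3j])
psi = np.array([2 + 0j, 1 + 1j, 1 - 1j])

# Explicit component sum:  sum_i Phi_i^* Psi_i
explicit = sum(phi[i].conjugate() * psi[i] for i in range(3))

# np.vdot conjugates its first argument, i.e. it computes <phi|psi>
assert np.isclose(np.vdot(phi, psi), explicit)

# The order matters: <psi|phi> is the complex conjugate of <phi|psi>
assert np.isclose(np.vdot(psi, phi), explicit.conjugate())
```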
If we take two normalized and orthogonal states \( |\mathit{\Psi}_{\class{red}{i}}\rangle \) and \( |\mathit{\Psi}_{\class{blue}{j}}\rangle \) and give them variable indices instead of fixed values, then their scalar product \( \langle \mathit{\Psi}_{\class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle \) gives either 0 or 1. You know this property from linear algebra when you form the scalar product of two basis vectors:

The scalar product in Eq. 9 of two different orthonormal states, \( \class{red}{i} \neq \class{blue}{j} \), yields: \( \langle\mathit{\Psi}_{ \class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle = 0 \). 
The scalar product in Eq. 9 of two equal orthonormal states, \( \class{red}{i} = \class{blue}{j} \), yields: \( \langle\mathit{\Psi}_{ \class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle = 1 \).
These two cases can be combined in a single equation using the Kronecker delta \( \delta_{\class{red}{i}\class{blue}{j}} \): $$ \begin{align} \langle\mathit{\Psi}_{ \class{red}{i}} | \mathit{\Psi}_{\class{blue}{j}} \rangle ~=~ \delta_{\class{red}{i}\class{blue}{j}} \end{align} $$
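This orthonormality relation can be verified numerically; the three standard basis vectors below serve as an illustrative orthonormal set:

```python
import numpy as np

# An orthonormal set of states (standard basis of C^3, illustrative)
states = [np.array([1., 0., 0.]),
          np.array([0., 1., 0.]),
          np.array([0., 0., 1.])]

# <Psi_i | Psi_j> equals the Kronecker delta delta_ij
for i in range(3):
    for j in range(3):
        delta_ij = 1.0 if i == j else 0.0
        assert np.isclose(np.vdot(states[i], states[j]), delta_ij)
print("orthonormality verified")
```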
The scalar product in Eq. 9 with the sum sign is not exact for states from an infinite-dimensional Hilbert space, because we would omit many function values between \( \class{red}{x_1} \) and \( \class{green}{x_2} \). With infinite-dimensional states we must switch to an integral. Therefore we replace the sum sign by an integral sign. Of course, we now consider the function values \( \mathit{\Phi}(x) \) and \( \mathit{\Psi}(x) \) not at discrete points \( x_{ \class{red}{i}} \), but at all points \(x\): $$ \begin{align} \langle\mathit{\Phi} | \mathit{\Psi} \rangle ~=~ \int \mathit{\Phi}^*(x) \, \mathit{\Psi}(x) \, \text{d}x \end{align} $$
To calculate the inner product of two states \( \langle\mathit{\Phi}| \) and \( |\mathit{\Psi} \rangle \), we need to evaluate the integral in Eq. 10.
What does this inner product (or a scalar product) actually mean? The inner product, like a scalar product, is a number that measures how much two states overlap:

If the inner product of two normalized states is \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle = 1 \), then the corresponding wave functions \( \class{blue}{\mathit{\Phi}} \) and \( \class{red}{\mathit{\Psi}} \) lie exactly on top of each other. They are equal.

If the inner product of two normalized states is \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle = 0 \), then the wave functions \( \class{blue}{\mathit{\Phi}} \) and \( \class{red}{\mathit{\Psi}} \) do not overlap at all.

All values of the inner product \( \langle\class{blue}{\mathit{\Phi}} | \class{red}{\mathit{\Psi}} \rangle \) with magnitude between 0 and 1 indicate a partial overlap of the two states.
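A numerical sketch of this overlap behavior with two normalized Gaussian wave packets (an illustrative choice): the further apart their centers, the smaller the overlap integral.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def packet(x0):
    """Normalized Gaussian wave function centered at x0."""
    return np.pi**-0.25 * np.exp(-(x - x0)**2 / 2)

# Overlap <Phi|Psi> approximated by a Riemann sum of Phi*(x) Psi(x) dx
def overlap(phi, psi):
    return np.sum(phi.conj() * psi) * dx

print(overlap(packet(0.0), packet(0.0)))  # identical packets: ~1
print(overlap(packet(0.0), packet(1.0)))  # partial overlap: between 0 and 1
print(overlap(packet(0.0), packet(8.0)))  # far apart: ~0
```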
Tensor product in Bra-Ket notation
Another important operation between a bra and a ket vector is the tensor product (more precisely: the outer product): \( |\mathit{\Phi} \rangle ~\otimes~ \langle\mathit{\Psi}| \). We can omit the tensor symbol \(\otimes\), because it is immediately clear from the bra-ket notation that this is not a scalar or inner product: \( |\mathit{\Phi} \rangle \langle\mathit{\Psi}| \). Note that, compared to the scalar product, the order of bra and ket is swapped here: the ket comes first.
The result of the tensor product is a matrix. If the states \( |\mathit{\Phi} \rangle \) and \( |\mathit{\Psi} \rangle \) each have only three components, then we get a 3x3 matrix: $$ \begin{align} |\mathit{\Phi} \rangle \langle\mathit{\Psi}| ~&=~ \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}} \\ \mathit{\Phi}_{\class{green}{2}} \\ \mathit{\Phi}_{\class{blue}{3}} \end{bmatrix} \, \left[ \mathit{\Psi}^*_{\class{red}{1}},~ \mathit{\Psi}^*_{\class{green}{2}},~ \mathit{\Psi}^*_{\class{blue}{3}} \right] \\\\ ~&=~ \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Phi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Phi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Phi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Phi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Phi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Phi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Phi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Phi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{blue}{3}} \end{bmatrix} \end{align} $$
You will encounter such matrices (in form of density matrices) very often in quantum mechanics, for example when learning about quantum entanglement.
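In NumPy this outer product is `np.outer`, with the bra side complex-conjugated (illustrative three-component states):

```python
import numpy as np

phi = np.array([1 + 1j, 2 + 0j, 1 - 1j])
psi = np.array([0.5 + 0j, 1j, 1 + 0j])

# |phi><psi| : column vector times complex-conjugated row vector
M = np.outer(phi, psi.conj())

print(M.shape)  # → (3, 3)

# Entry (i, j) of the matrix is Phi_i * Psi_j^*
assert np.isclose(M[0, 1], phi[0] * psi[1].conjugate())
```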
If we take a normalized state \( |\mathit{\Psi} \rangle \), that is, the magnitude of this vector is 1, and form the tensor product of this state with itself, we get a projection matrix \( |\mathit{\Psi} \rangle \langle\mathit{\Psi}| \) (or projection operator, if no concrete components are considered): $$ \begin{align} |\mathit{\Psi} \rangle \langle\mathit{\Psi}| ~&=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} \\ \mathit{\Psi}_{\class{green}{2}} \\ \mathit{\Psi}_{\class{blue}{3}} \end{bmatrix} \, \left[ \mathit{\Psi}^*_{\class{red}{1}},~ \mathit{\Psi}^*_{\class{green}{2}},~ \mathit{\Psi}^*_{\class{blue}{3}} \right] \\\\ ~&=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{blue}{3}} \end{bmatrix} \end{align} $$
When we apply it to some ket vector \( |\mathit{\Phi} \rangle \), we multiply the matrix \( |\mathit{\Psi} \rangle \langle\mathit{\Psi}| \) by a column vector \( |\mathit{\Phi} \rangle \): $$ \begin{align} |\mathit{\Psi} \rangle \langle\mathit{\Psi} | \mathit{\Phi} \rangle ~&=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{red}{1}} & \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{green}{2}} & \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{blue}{3}} \end{bmatrix} \, \begin{bmatrix} \mathit{\Phi}_{\class{red}{1}} \\ \mathit{\Phi}_{\class{green}{2}} \\ \mathit{\Phi}_{\class{blue}{3}} \end{bmatrix} \\\\
~&=~ \begin{bmatrix} \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{red}{1}} \, \mathit{\Phi}_{\class{red}{1}} ~+~ \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{green}{2}} \, \mathit{\Phi}_{\class{green}{2}} ~+~ \mathit{\Psi}_{\class{red}{1}} \, \mathit{\Psi}^*_{\class{blue}{3}} \, \mathit{\Phi}_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{red}{1}} \, \mathit{\Phi}_{\class{red}{1}} ~+~ \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{green}{2}} \, \mathit{\Phi}_{\class{green}{2}} ~+~ \mathit{\Psi}_{\class{green}{2}} \, \mathit{\Psi}^*_{\class{blue}{3}} \, \mathit{\Phi}_{\class{blue}{3}} \\ \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{red}{1}} \, \mathit{\Phi}_{\class{red}{1}} ~+~ \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{green}{2}} \, \mathit{\Phi}_{\class{green}{2}} ~+~ \mathit{\Psi}_{\class{blue}{3}} \, \mathit{\Psi}^*_{\class{blue}{3}} \, \mathit{\Phi}_{\class{blue}{3}} \end{bmatrix} \end{align} $$
The special feature of a projection matrix is: It projects the state \( |\mathit{\Phi}\rangle \) onto the state \( |\mathit{\Psi}\rangle \). In other words, it gives the part of the wave function \( \mathit{\Phi} \) that overlaps with the wave function \( \mathit{\Psi} \). The result of the projection is thus a ket vector \( |\mathit{\Psi} \rangle \langle \mathit{\Psi} | \mathit{\Phi}\rangle \) describing the overlap of the wave functions \( \mathit{\Phi} \) and \( \mathit{\Psi} \). Projection matrices are thus an important tool in theoretical physics for studying the overlap of quantum states.
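This projection behavior can be sketched numerically (illustrative vectors): applying \( |\mathit{\Psi}\rangle\langle\mathit{\Psi}| \) to \( |\mathit{\Phi}\rangle \) returns \( |\mathit{\Psi}\rangle \) scaled by the overlap \( \langle\mathit{\Psi}|\mathit{\Phi}\rangle \).

```python
import numpy as np

psi = np.array([1., 1j, 0.]) / np.sqrt(2)   # normalized state
phi = np.array([1., 2., 3.]) + 0j           # arbitrary state

# Projection matrix |psi><psi|
P = np.outer(psi, psi.conj())

# P |phi> = <psi|phi> |psi> : the part of phi that overlaps with psi
assert np.allclose(P @ phi, np.vdot(psi, phi) * psi)

# Projecting twice changes nothing: P P = P (hallmark of a projection)
assert np.allclose(P @ P, P)
```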
Basis change with the projection matrices
Probably the most important use of projection matrices is the very simple change of basis. If we have some quantum state \( |\mathit{\Phi}\rangle \) and we want to look at it from a different perspective, or mathematically speaking, represent it in a different basis, then of course the first thing we do is choose the desired basis: \( \{ |\mathit{\Psi}_{\class{red}{i}}\rangle \} \). This is, as you hopefully know from linear algebra, a set of orthonormal vectors \( |\mathit{\Psi}_{\class{red}{1}}\rangle \), \( |\mathit{\Psi}_{\class{red}{2}}\rangle \), \( |\mathit{\Psi}_{\class{red}{3}}\rangle \) and so on. Their number is equal to the dimension of the Hilbert space in which these vectors live.
For the sake of demonstration, let us assume that our desired basis consists of only three basis vectors: \( \{ |\mathit{\Psi}_{\class{red}{1}}\rangle, |\mathit{\Psi}_{\class{red}{2}}\rangle, |\mathit{\Psi}_{\class{red}{3}}\rangle \} \). With each of these basis vectors, we can construct projection matrices: \( |\mathit{\Psi}_{\class{red}{1}}\rangle\langle \mathit{\Psi}_{\class{red}{1}}| \), \( |\mathit{\Psi}_{\class{red}{2}}\rangle\langle \mathit{\Psi}_{\class{red}{2}}| \) and \( |\mathit{\Psi}_{\class{red}{3}}\rangle\langle \mathit{\Psi}_{\class{red}{3}}| \). To represent the quantum state \( |\mathit{\Phi}\rangle \) in this basis, we next form the sum of the projection matrices: $$ \begin{align} |\mathit{\Psi}_{\class{red}{1}}\rangle\langle \mathit{\Psi}_{\class{red}{1}}| ~+~ |\mathit{\Psi}_{\class{red}{2}}\rangle\langle \mathit{\Psi}_{\class{red}{2}}| ~+~ |\mathit{\Psi}_{\class{red}{3}}\rangle\langle \mathit{\Psi}_{\class{red}{3}}| ~=~ I \end{align} $$
As we know from mathematics, the sum of the projection matrices of states forming a basis is a unit matrix \( I \). The fact that the sum results in a unit matrix is very important when changing the basis, because we do not want to change the quantum state \( |\mathit{\Phi}\rangle \). A unit matrix multiplied by a column vector \( |\mathit{\Phi}\rangle \) does not change this vector: $$ \begin{align} I \, |\mathit{\Phi}\rangle ~=~ |\mathit{\Phi}\rangle \end{align} $$
Now we substitute the sum of the basis projection matrices from Eq. 14 for the unit matrix in Eq. 15:
$$ \begin{align} |\mathit{\Phi}\rangle ~&=~ \left( |\mathit{\Psi}_{\class{red}{1}}\rangle\langle \mathit{\Psi}_{\class{red}{1}}| ~+~ |\mathit{\Psi}_{\class{red}{2}}\rangle\langle \mathit{\Psi}_{\class{red}{2}}| ~+~ |\mathit{\Psi}_{\class{red}{3}}\rangle\langle \mathit{\Psi}_{\class{red}{3}}| \right) \, |\mathit{\Phi}\rangle \\\\
~&=~ |\mathit{\Psi}_{\class{red}{1}}\rangle\langle \mathit{\Psi}_{\class{red}{1}}|\mathit{\Phi}\rangle ~+~ |\mathit{\Psi}_{\class{red}{2}}\rangle\langle \mathit{\Psi}_{\class{red}{2}}|\mathit{\Phi}\rangle ~+~ |\mathit{\Psi}_{\class{red}{3}}\rangle\langle \mathit{\Psi}_{\class{red}{3}}|\mathit{\Phi}\rangle
\end{align} $$
The resulting state \( |\mathit{\Phi}\rangle \), while notated identically to the state before the basis change, is now represented in the basis \( \{ |\mathit{\Psi}_{\class{red}{1}}\rangle, |\mathit{\Psi}_{\class{red}{2}}\rangle, |\mathit{\Psi}_{\class{red}{3}}\rangle \} \). We can also, if we want to emphasize the new basis, give it an index \( \Psi \): \( |\mathit{\Phi}\rangle_{\Psi} \). I hope you now understand how useful the concept of projection matrices is!
In general, we can write the basis change into a basis with \(n\) basis vectors \( \{ |\mathit{\Psi}_{\class{red}{i}}\rangle \} \) by simply replacing the number 3 at the summation sign in Eq. 14 with \(n\): $$ \begin{align} |\mathit{\Phi}\rangle ~=~ \sum_{\class{red}{i} \,=\, 1}^{n} |\mathit{\Psi}_{\class{red}{i}}\rangle\langle \mathit{\Psi}_{\class{red}{i}}|\mathit{\Phi}\rangle \end{align} $$
The basis change with a finite number of basis vectors is of course exact only for states \( |\mathit{\Phi}\rangle \) and \( |\mathit{\Psi}\rangle \) living in finite-dimensional Hilbert spaces. For states with infinitely many components, \(n\) is infinite and the state \( |\mathit{\Phi}\rangle_{\Psi} \) is only an approximation in the new basis. The approximation becomes more accurate the larger we choose \(n\). But how does the basis change work for states with infinitely many components? With an integral! For this, replace the discrete summation with a sum sign by a continuous summation with an integral:
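The finite-dimensional basis change described above can be sketched numerically (the random orthonormal basis obtained from a QR decomposition is an illustrative choice): the projection matrices of an orthonormal basis sum to the unit matrix, and expanding the state in the new basis leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random orthonormal basis of C^3 via QR decomposition
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)
basis = [Q[:, i] for i in range(3)]          # orthonormal kets |Psi_i>

# Completeness: sum_i |Psi_i><Psi_i| = I (unit matrix)
P_sum = sum(np.outer(b, b.conj()) for b in basis)
assert np.allclose(P_sum, np.eye(3))

# Expanding |Phi> in the new basis does not change the state:
# |Phi> = sum_i |Psi_i> <Psi_i|Phi>
phi = np.array([1., 2. - 1j, 3j])
expansion = sum(np.vdot(b, phi) * b for b in basis)
assert np.allclose(expansion, phi)
```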
Now you should have a solid basic knowledge of bra-ket notation:

What bra and ket vectors are.

How you use them to form scalar and inner products.

How you construct projection matrices with it.

How to perform a basis change with projection matrices in braket notation.
In the next lesson, you'll learn about the operators used in quantum mechanics, namely the Hermitian operators, in Bra-Ket notation of course.