### Origin of the Ket

The ket in quantum mechanics arises from the need to describe the concept of superposition, as well as a system that 1) changes with respect to time and 2) has so-called states. The ket can symbolize such a state. The concept of superposition is closely related to the mathematical operation of addition.

In the words of Paul Dirac, “the superposition process is a kind of additive process and **implies** that states can in some way be added to give new states. The states must therefore be connected with mathematical quantities of a kind which can be added together to give other quantities of the same kind. The most obvious of such quantities are **vectors**” [1]. Dirac then states “it is desirable to have a special name for describing the vectors which are connected with the states of a system in quantum mechanics, whether they are in a space of a finite or an infinite number of dimensions. We shall call them *ket vectors*, or simply *kets*, and denote a general one of them by a special symbol $\ket{}$. If we want to specify a particular one of them by a label, $A$ say, we insert it in the middle, thus $\ket{A}$” [1].

### Superposition

Apparently, superposition is a “kind of additive process.” What I am unsure about is how superposition **implies** “that states can…be added to give new states.” Where does this implication come from? There is another section in Dirac’s book [1] that explains superposition, so it is logical to look there. It’s a long story but I’ll attempt to make it short. Dirac elaborates on the “polarization of photons” and the “interference of photons.” He claims that these are examples of superposition. Note that this claim uses the assumption that a photon exists. But this post is not about proving the existence of photons…

A sentence that summarizes the essence of superposition is “the general principle of superposition of quantum mechanics applies to the states…of any one dynamical system.” It is interesting that Dirac tends to focus on *one* system in contrast to many systems. Anyway, superposition “requires us to assume that between these states there exist peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states.” In my opinion, this can be interpreted as a reductionist viewpoint, meaning that a whole system can be decomposed into its parts. But this is just one interpretation. To be clear, “the original state must be regarded as the result of a kind of *superposition* of the two or more new states…”

Furthermore, “any state may be considered as the result of a superposition of two or more other states…” Viewed in a slightly different way, “any two or more states may be superposed to give a new state.” It appears that the word ‘superposed’ is almost synonymous with the word ‘added.’ I use the words ‘almost’ and ‘appears’ because superposition does have some connotations that the mathematical operation of addition does not. Also, if any state can be formed by adding other states, then quantum mechanics should be able to describe any state of any system within its scope. That is a strong implication: it suggests that quantum mechanics does not break down within its own domain. For the theory to be trusted, it should break down only where it is expected to, if it breaks down at all.

It is possible that this idea of superposition originated in part from Fourier analysis, since Dirac states that superposition is “like the procedure of resolving a wave into Fourier components.” Anyway, I’ll run with the idea of superposition, because why not. After all, electrostatics seems like a good theory, and it relies on some type of superposition.
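As an illustrative sketch of the Fourier analogy, the following NumPy snippet builds a wave as a superposition of two sinusoidal components (the frequencies and amplitudes are chosen arbitrarily here), then resolves the wave back into its Fourier components:

```python
import numpy as np

# One period sampled at 1000 points; endpoint=False keeps whole periods.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

# Two hypothetical components (frequencies 3 and 7, chosen arbitrarily).
component_1 = 1.0 * np.sin(2 * np.pi * 3 * t)
component_2 = 0.5 * np.sin(2 * np.pi * 7 * t)

# The wave is the superposition (sum) of its components.
wave = component_1 + component_2

# Resolving the wave into Fourier components recovers the two frequencies.
spectrum = np.abs(np.fft.rfft(wave)) / len(t) * 2
peaks = np.flatnonzero(spectrum > 0.1)
print(peaks)  # -> [3 7]
```

The decomposition recovers exactly the components that were added, which is the sense in which superposing states and resolving a state into other states are two directions of the same process.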

### Some Properties of Kets

I am just going to follow Dirac’s book for these properties. The first property is that

- “Ket vectors may be multiplied by **complex numbers** and may be added together to give other ket vectors, e.g. from two ket vectors $\ket{A}$ and $\ket{B}$ we can form $c_1 \ket{A} + c_2 \ket{B} = \ket{R}$, say, where $c_1$ and $c_2$ are any two complex numbers” [1].
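This property is easy to sketch numerically. In the snippet below, kets are represented as complex column vectors (two-dimensional, an arbitrary choice for illustration), and a linear combination of two kets yields another ket:

```python
import numpy as np

# Kets as complex column vectors (2-dimensional here, chosen arbitrarily).
ket_A = np.array([[1.0 + 0.0j], [0.0 + 0.0j]])  # |A>
ket_B = np.array([[0.0 + 0.0j], [1.0 + 0.0j]])  # |B>

# Any two complex coefficients give a new ket |R> = c1|A> + c2|B>.
c1, c2 = 1.0 + 2.0j, 3.0 - 1.0j
ket_R = c1 * ket_A + c2 * ket_B

print(ket_R)  # a 2x1 column vector: [[1+2j], [3-1j]]
```

The result $\ket{R}$ is a quantity “of the same kind” as $\ket{A}$ and $\ket{B}$, which is exactly the closure property Dirac describes.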

Since any ket can be written in terms of two or more kets, this process can continue without end, leading to a never-ending sum of kets,

$\sum_N \ket{N} = \ket{R}$

in which $\ket{R}$ is another ket and $N$ is a label to distinguish one ket from another. In math, sometimes one can write $\int dN$ (or something similar) instead of $\sum_N$, in order to move from the discrete to the continuous, so in certain situations it might be reasonable to write

$\int dN \ket{N} = \ket{R}$

but from a calculus standpoint, this suggests an “area under” a continuum of kets, which doesn’t make much sense to me visually. Nevertheless, it may still be valid to write this in some abstract sense.
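The discrete sum, at least, is straightforward to sketch. Below, each labeled ket $\ket{N}$ is taken to be a standard basis column vector (an assumption made purely for illustration), and summing over the labels produces another ket:

```python
import numpy as np

# A finite family of labeled kets |N>; here |N> is the N-th standard
# basis column vector in a 4-dimensional space (an arbitrary choice).
kets = {}
for n in range(4):
    ket = np.zeros((4, 1), dtype=complex)
    ket[n, 0] = 1.0
    kets[n] = ket

# The discrete sum  sum_N |N> = |R>.
ket_R = sum(kets.values())

print(ket_R.ravel())  # [1 1 1 1] -- still a ket of the same kind
```

The continuous version $\int dN\, \ket{N}$ would replace this finite sum with an integral over a continuously labeled family of kets; the finite case at least shows that summing kets stays within the space of kets.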

In short, a ket is a symbol for a vector. By convention, this vector is a column vector. On the other hand, a row vector is symbolized by a bra $\bra{}$. The labelling convention for a bra is analogous to the case of a ket, so it is allowable to label a bra with $B$ by writing $\bra{B}$. Now it is possible to multiply a bra by a ket to obtain a scalar product, assuming the dimensions of the vectors match in the middle; this multiplication is written as $\langle B | A \rangle$, a 1×1 matrix, i.e. a scalar. On the other hand, $\ket{A}\bra{B}$ is a matrix whose size depends on the dimensions of the ket and the bra. Notice that the inner dimensions of the product $\ket{A}\bra{B}$ are both 1 regardless of the outer dimensions, so the product is always defined, and if the ket and bra have different dimensions it is even possible to produce a non-square matrix with $\ket{A}\bra{B}$.
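Both products can be checked directly with matrix multiplication. In this sketch the entries of the vectors are arbitrary; only the shapes matter:

```python
import numpy as np

ket_A = np.array([[1.0 + 1.0j], [2.0 - 1.0j]])  # |A>: a 2x1 column vector
bra_B = np.array([[0.0 - 1.0j, 1.0 + 0.0j]])    # <B|: a 1x2 row vector

# <B|A>: (1x2)(2x1) -> 1x1, i.e. a scalar product.
inner = bra_B @ ket_A
print(inner.shape)  # (1, 1)

# |A><B|: (2x1)(1x2) -> 2x2, a matrix.
outer = ket_A @ bra_B
print(outer.shape)  # (2, 2)

# With a 3-dimensional bra (hypothetical), |A><C| is a 2x3 matrix:
# the inner dimensions are both 1, so the product is defined even
# though the result is non-square.
bra_C = np.array([[1.0, 0.0, 1.0]])
print((ket_A @ bra_C).shape)  # (2, 3)
```

The bra-ket order collapses to a scalar, while the ket-bra order expands to a matrix whose shape is set entirely by the outer dimensions.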