*Complexity theory* is a method for comparing different algorithms regardless of their implementation. In complexity theory, an algorithm is usually characterised by an estimate of the number of steps it needs as a function of the input length. The fewer steps the algorithm needs, the better.
When comparing algorithms in complexity theory, we do not compare the actual runtime that the algorithm needs, but how the number of steps scales with the input length. A key observation is that scaling is much more important than the prefactors in the functions.
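To make "number of steps as a function of the input length" concrete, here is a minimal Python sketch that counts the comparisons a worst-case linear search performs. The function and the chosen input sizes are illustrative assumptions, not taken from the figure below.

```python
# Minimal sketch: count the steps a linear search takes as the input grows.
# The exact count depends on the input, so we take the worst case
# (the target is never found) to get a clean function of the length.

def linear_search_steps(n: int) -> int:
    """Worst-case number of comparisons for a linear search over n items."""
    data = list(range(n))
    steps = 0
    for item in data:
        steps += 1          # one comparison per element
        if item == -1:      # target that never occurs -> worst case
            break
    return steps

for n in [10, 20, 40, 80]:
    print(n, linear_search_steps(n))
# Doubling the input length doubles the step count: the scaling is linear.
```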
![[complexity_theory.excalidraw.light.svg]]
In the figure, we can compare the number of steps needed for the same problem by three different algorithms (red, orange and blue). The orange and blue algorithms both use a linear number of steps: if the input length doubles, the number of steps doubles as well. However, they scale with different prefactors. The red algorithm needs quadratically many steps to finish the task: if we double the input length, the number of steps grows by a factor of four.
Even though the red algorithm needs fewer steps for small input lengths, at some point the quadratic dependence will make it run slower than the linear algorithms in orange and blue (the intersections are marked by grey dots).
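The crossover can also be found numerically. The sketch below uses made-up prefactors (a linear algorithm with a large prefactor against a quadratic one with no prefactor; these are assumptions, not the values from the figure) and searches for the input length where the quadratic step count first overtakes the linear one.

```python
# Illustrative sketch with assumed prefactors: a quadratic algorithm that is
# fast for small inputs vs. a linear algorithm with a larger prefactor.
# Past the crossover, the quadratic algorithm is always slower.

def steps_linear(n: int) -> int:
    return 50 * n          # linear scaling, large prefactor

def steps_quadratic(n: int) -> int:
    return n * n           # quadratic scaling, no prefactor

crossover = next(n for n in range(1, 10_000)
                 if steps_quadratic(n) > steps_linear(n))
print("quadratic overtakes linear at n =", crossover)   # n = 51

for n in [10, 100, 1000]:
    print(n, steps_linear(n), steps_quadratic(n))
# At n = 10 the quadratic algorithm wins (100 vs. 500 steps);
# at n = 1000 it already needs 20x more steps (1,000,000 vs. 50,000).
```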
The key insight of complexity theory is that, for large input lengths, we can compare algorithms by checking only how they scale rather than their exact functional dependence.
In computer science, complexity theory is used to classify the difficulty of a problem. If a problem is in a certain **complexity class**, then we can deduce that it might be too hard for a classical computer/quantum computer to solve.
>[!read]- Further Reading
>- [[Algorithm]]
>- [[Quantum Algorithm]]
>- [[Classical Information]]
>- [[Classical Computer]]
>- [[Quantum Computer]]