
Time Complexities

Big-O Algorithm Complexity Cheat Sheet (Know Thy Complexities!) @ericdrowell

Big O notation (with a capital letter O, not a zero), also called Landau's symbol, is a notation used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines. In algorithm analysis, it describes how the runtime or space requirement of an algorithm grows as the input grows.
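To make "asymptotic behavior" concrete, here is a minimal Python sketch (the function `f` and the constants are my own illustrative choices): a function f(n) = 3n² + 5n + 2 is O(n²) because we can pick a constant c and a threshold n₀ such that f(n) ≤ c·n² for all n ≥ n₀.

```python
# Illustrative example: f(n) = 3n^2 + 5n + 2 is O(n^2).
# Witness constants (one valid choice among many): c = 10, n0 = 1,
# since 5n + 2 <= 7n^2 for every n >= 1.
def f(n):
    return 3 * n**2 + 5 * n + 2

c, n0 = 10, 1

# Spot-check the bound f(n) <= c * n^2 over a large range of n >= n0.
assert all(f(n) <= c * n * n for n in range(n0, 10_000))
print("3n^2 + 5n + 2 <= 10n^2 for all n >= 1, so f is O(n^2)")
```

The lower-order terms 5n and 2 become irrelevant as n grows, which is exactly why Big-O keeps only the dominant term.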

Two functions with the same Big-O bound grow at comparable rates and will therefore tend to show similar relative performance on large inputs.

For example, the bubble sort algorithm has an average time complexity of O(n^2), while merge sort and heap sort both have an average complexity of O(n log n). In the average case, merge sort and heap sort will demonstrate similar performance, and both will outperform bubble sort.
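The gap between O(n²) and O(n log n) can be seen directly by counting comparisons. A minimal Python sketch (the instrumented helpers are my own, not from the original text):

```python
import random

def bubble_sort_ops(a):
    """Bubble sort; returns the number of comparisons performed.
    This is always n*(n-1)/2, i.e., Theta(n^2)."""
    a, ops = list(a), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            ops += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return ops

def merge_sort_ops(a):
    """Merge sort; returns (sorted list, comparison count ~ n log n)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, lo = merge_sort_ops(a[:mid])
    right, ro = merge_sort_ops(a[mid:])
    merged, ops, i, j = [], lo + ro, 0, 0
    while i < len(left) and j < len(right):
        ops += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]  # drain the remaining run
    return merged, ops

data = random.sample(range(100_000), 2_000)
# Bubble sort: exactly 2000*1999/2 = 1,999,000 comparisons.
print("bubble sort comparisons:", bubble_sort_ops(data))
# Merge sort: on the order of n*log2(n), i.e., tens of thousands.
print("merge sort comparisons: ", merge_sort_ops(data)[1])
```

Already at n = 2,000 the quadratic algorithm does roughly two orders of magnitude more work; the gap widens without bound as n grows.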

In the table, poly(x) = x^O(1), i.e., polynomial in x.

| Name | Complexity class | Running time (T(n)) | Examples of running times | Example algorithms |
|---|---|---|---|---|
| constant time | | O(1) | 10 | Determining if an integer (represented in binary) is even or odd |
| inverse Ackermann time | | O(α(n)) | | Amortized time per operation on a disjoint-set (union-find) data structure |
| iterated logarithmic time | | O(log* n) | | Distributed coloring of cycles |
| log-logarithmic time | | O(log log n) | | Amortized time per operation using a bounded priority queue |
| logarithmic time | DLOGTIME | O(log n) | log n, log(n²) | Binary search |
| polylogarithmic time | | poly(log n) | (log n)² | |
| fractional power (sublinear polynomial) | | O(n^c), where 0 < c < 1 | n^(1/2), n^(2/3) | Integer factorization; searching in a k-d tree; Grover's algorithm (a quantum algorithm that searches an unsorted database of n entries in O(√n) time); string algorithms such as longest common prefix, where not every character needs to be examined |
| linear time | | O(n) | n | Finding the smallest or largest item in an unsorted array |
| "n log-star n" time | | O(n log* n) | | Seidel's polygon triangulation algorithm |
| quasilinear (linearithmic) time | | O(n log n) | n log n, log(n!) | Fastest possible comparison sort; fast Fourier transform; merge sort |
| quadratic time | | O(n²) | n² | Bubble sort; insertion sort; direct convolution (check all pairs) |
| cubic time | | O(n³) | n³ | Naive multiplication of two n×n matrices; calculating partial correlation (check all triples) |
| pseudo-polynomial time | | polynomial in the numeric value of the input | | See the note below the table |
| polynomial time | P | 2^(O(log n)) = poly(n) | n, n log n, n^10 | Karmarkar's algorithm for linear programming; AKS primality test |
| quasi-polynomial time | QP | 2^(poly(log n)) | n^(log log n), n^(log n) | Best-known O(log² n)-approximation algorithm for the directed Steiner tree problem |
| sub-exponential time (first definition) | SUBEXP | O(2^(n^ε)) for all ε > 0 | O(2^(log n log log n)) | Assuming complexity-theoretic conjectures, BPP is contained in SUBEXP |
| sub-exponential time (second definition) | | 2^(o(n)) | 2^(n^(1/3)) | Best-known algorithms for integer factorization and graph isomorphism |
| exponential time (with linear exponent) | E | 2^(O(n)) | 1.1^n, 10^n | Solving the traveling salesman problem using dynamic programming |
| exponential time | EXPTIME | 2^(poly(n)) | 2^n, 2^(n²) | Solving matrix chain multiplication via brute-force search (exhaustive search; check all subsets) |
| factorial time | | O(n!) | n! | Solving the traveling salesman problem via brute-force search |
| double exponential time | 2-EXPTIME | 2^(2^(poly(n))) | 2^(2^n) | Deciding the truth of a given statement in Presburger arithmetic |
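As a concrete instance of one table row, binary search achieves logarithmic time because every iteration halves the remaining search interval. A minimal Python sketch (identifiers are my own):

```python
def binary_search(a, target):
    """Classic O(log n) search over a sorted list.

    Each iteration discards half of the remaining interval, so the
    loop runs at most about log2(n) + 1 times.
    Returns the index of `target`, or -1 if it is absent.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

# Usage: searching a million sorted items takes at most ~21 probes.
print(binary_search(list(range(0, 2_000_000, 2)), 42))
```

Doubling the input size adds only one extra iteration, which is exactly what the O(log n) row in the table predicts.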

[Image: big-o-notations-cheatsheet]

https://en.wikipedia.org/wiki/Time_complexity

EP132: Big O Notation 101: The Secret to Writing Efficient Algorithms

Pseudo-polynomial time

In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input (the largest integer present in the input) but not necessarily in the length of the input (the number of bits required to represent it), which is the case for polynomial-time algorithms.

In general, the numeric value of the input is exponential in the input length, which is why a pseudo-polynomial time algorithm does not necessarily run in polynomial time with respect to the input length.
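A standard illustration of this distinction is primality testing by trial division. A minimal Python sketch (function name is my own):

```python
def is_prime_trial_division(n):
    """Primality test by trial division.

    The loop runs O(sqrt(n)) times, which is polynomial in the
    numeric value n. But the input length is b = log2(n) bits,
    so O(sqrt(n)) = O(2^(b/2)) -- exponential in the input length.
    Hence this is a pseudo-polynomial time algorithm.
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```

Doubling the numeric value n barely changes the running time, but adding one bit to the input length doubles n, so the running time is exponential when measured against input length, exactly as the paragraph above describes.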

https://en.wikipedia.org/wiki/Pseudo-polynomial_time