An algorithm is a well-defined computational procedure that takes input, performs a finite sequence of steps, and produces output. It’s both a step-by-step process and a tool for solving computational problems — each problem defines an input–output relationship, and the algorithm provides the specific method to achieve it.
Example – Sorting problem:
Input: a sequence of numbers ⟨a₁, a₂, …, aₙ⟩
Output: a permutation ⟨a′₁, a′₂, …, a′ₙ⟩ such that a′₁ ≤ a′₂ ≤ … ≤ a′ₙ
Example instance: ⟨31, 41, 59, 26, 41, 58⟩ → ⟨26, 31, 41, 41, 58, 59⟩
Sorting is a fundamental operation used by many programs, and the best algorithm depends on factors like data size, order, constraints, and hardware.
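The sorting problem above can be solved by many algorithms; as a sketch, here is insertion sort (the function name is my own), applied to the example instance:

```python
def insertion_sort(a):
    """Sort the list a in place, ascending, and return it."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements larger than key one slot right to make room for it.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([31, 41, 59, 26, 41, 58]))  # [26, 31, 41, 41, 58, 59]
```

Insertion sort runs in O(n²) time but is simple and fast on small or nearly sorted inputs, illustrating how the "best" algorithm depends on the data.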
Correctness:
An algorithm is correct if, for every input instance, it halts (finishes computing in finite time) and produces the correct output.
Incorrect algorithms may still be useful if their error rate is controlled.
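A classic example of a usefully "incorrect" algorithm is a randomized primality test: it can err, but each additional independent trial shrinks the error probability. A minimal sketch using the Fermat test (function name and trial count are illustrative):

```python
import random

def probably_prime(n, trials=20):
    """Fermat primality test: can wrongly answer 'prime' (notably for
    Carmichael numbers), but the error rate drops with more trials."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a witnesses that n is definitely composite
    return True  # no witness found: n is probably prime

print(probably_prime(97))   # True
print(probably_prime(100))  # False
```

A "False" answer is always right; a "True" answer is only probably right, which is exactly the kind of controlled error rate that keeps such algorithms useful.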
Applications of algorithms:
Algorithms often tackle problems with an enormous number of possible solutions, where testing each option is infeasible. Instead, they find efficient ways to reach correct or optimal results.
Examples include finding short routes in road networks, comparing DNA sequences, scheduling shared resources, and securing online transactions with cryptography.
All these problems share two key traits: they have many candidate solutions, the overwhelming majority of which do not solve the problem, and they have practical, real-world applications.
Some problems, like the Fourier transform, don’t have a clear set of discrete candidate solutions but still rely on algorithmic procedures to compute results efficiently.
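For instance, the discrete Fourier transform can be computed directly from its definition in O(n²) time (the FFT improves this to O(n log n)); a naive sketch:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_j x[j] * e^(-2*pi*i*j*k/n)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

# A constant signal transforms to a single spike at frequency 0.
print([round(abs(v), 6) for v in dft([1, 1, 1, 1])])  # [4.0, 0.0, 0.0, 0.0]
```

Here there is no discrete set of candidate answers to search; the algorithmic question is purely how to compute the result efficiently.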
Data structures
A data structure organizes and stores data to enable efficient access and modification. No single structure works well for every purpose, so choosing the right one is crucial for designing effective algorithms.
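As a small illustration of why the choice matters (the sizes here are illustrative), membership tests on a Python list scan elements one by one, while a set uses hashing for near-constant-time lookups:

```python
data = list(range(100_000))

as_list = data       # membership test is O(n): scans the elements linearly
as_set = set(data)   # membership test is O(1) on average: one hash lookup

# The same question has very different cost profiles on large inputs.
print(99_999 in as_list)  # True, found after scanning ~100k items
print(99_999 in as_set)   # True, found after a single hash lookup
```

Both structures give the right answer; the data structure determines how much work the algorithm must do to find it.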