Greedy algorithms solve problems by making the locally best choice at each step, in the hope that these choices lead to a globally optimal solution. They are not always correct, but when they are, they tend to be simple and efficient.
- Activity-Selection Problem: A key example where a greedy approach yields an optimal solution, which can also be understood via dynamic programming.
- Basics: Greedy methods and how to prove their correctness.
- Applications:
  - Huffman coding (data compression)
  - Cache replacement: the “furthest-in-future” strategy is optimal when the full access sequence is known in advance
- Other Examples: Minimum spanning trees, Dijkstra’s algorithm, greedy set-covering heuristic
Greedy algorithms are powerful and widely applicable, and studying them helps understand many fundamental algorithms.
15.1 An activity-selection problem
We want to schedule the maximum number of non-overlapping activities on a single shared resource. Each activity $a_i$ has a start time $s_i$ and a finish time $f_i$, occupying the resource during $[s_i, f_i)$. Two activities are compatible if their intervals do not overlap.
Dynamic Programming Approach:
- If we pick an activity $a_k$, the remaining candidates split into two groups: those that finish before $a_k$ starts and those that start after $a_k$ finishes.
- Solve each group recursively for a maximum set of compatible activities.
- Combine results: $\text{max set} = \text{before} + \{a_k\} + \text{after}$, then try every possible choice of $a_k$ and keep the largest result (see the sketch below).
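To make the recursion concrete, here is a minimal Python sketch of this dynamic-programming formulation (the function name, the sentinel activities, and the example data are illustrative, not part of the notes). For every activity $a_k$ that fits in the current window, it combines the best “before” and “after” solutions and keeps the largest total.

```python
from functools import lru_cache

def max_compatible_dp(starts, finishes):
    """Size of a maximum set of mutually compatible activities (DP version).

    Assumes each activity has start < finish.
    """
    n = len(starts)
    # Sentinel activities a_0 and a_{n+1} bound the problem on both sides.
    s = [float("-inf")] + list(starts) + [float("inf")]
    f = [float("-inf")] + list(finishes) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        # Best count using only activities that start after a_i finishes
        # and finish before a_j starts.
        best = 0
        for k in range(1, n + 1):
            if s[k] >= f[i] and f[k] <= s[j]:            # a_k fits in the window
                best = max(best, c(i, k) + 1 + c(k, j))  # before + chosen + after
        return best

    return c(0, n + 1)

# Example (hypothetical data):
# max_compatible_dp([1, 3, 0, 5, 8], [4, 5, 6, 7, 9])  ->  3
```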
Key Insight (Greedy):
Instead of trying every possible activity, we can always pick the activity that finishes earliest. Then only one subproblem remains, which makes the solution much simpler (see the recursive sketch below).
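The sketch below shows how that single subproblem unfolds; the function name and the 1-indexed array layout are illustrative assumptions, and activities are assumed sorted by finish time. After choosing an activity, it scans forward for the first activity that starts after the chosen one finishes and recurses on that one remaining subproblem.

```python
def greedy_select_recursive(s, f, k, n):
    """Return indices of a maximum-size compatible set among activities k+1..n.

    Assumes activities 1..n are sorted by finish time (f[1] <= ... <= f[n])
    and that a fictitious activity 0 with f[0] = 0 starts the recursion.
    """
    m = k + 1
    while m <= n and s[m] < f[k]:   # skip activities that start before a_k finishes
        m += 1
    if m > n:
        return []
    # a_m is the earliest-finishing activity compatible with a_k: take it,
    # then solve the single remaining subproblem S_m.
    return [m] + greedy_select_recursive(s, f, m, n)

# Usage (hypothetical data, 1-indexed with a dummy slot 0):
# s = [0, 1, 3, 0, 5, 8]; f = [0, 4, 5, 6, 7, 9]
# greedy_select_recursive(s, f, 0, 5)  ->  [1, 4, 5]
```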

Making the greedy choice
Instead of solving all subproblems as in dynamic programming, we can make one simple choice: always pick the activity that finishes earliest. This leaves the resource available for as many of the remaining activities as possible (an iterative sketch follows the list below).
- Once the earliest-finishing activity $a_1$ is chosen, the only subproblem is the set of activities that start after $a_1$ finishes. The other group, activities that finish before $a_1$ starts, is empty, because no activity finishes earlier than $a_1$.
- Let $S_k$ be the set of activities that start after activity $a_k$ finishes. If $a_k$ belongs to an optimal solution, that solution consists of $a_k$ plus an optimal solution for $S_k$ (optimal substructure).
- Theorem: The earliest-finishing activity is always part of some optimal solution. Exchange argument: take any maximum-size compatible subset and swap its earliest-finishing activity for the overall earliest finisher; the replacement finishes no later, so the rest of the subset stays compatible and the size is unchanged.
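The same idea turns into a short iterative routine. Below is a minimal sketch (the function name and the example data are illustrative assumptions): sort the activities by finish time, then sweep once, keeping every activity that starts no earlier than the finish time of the last activity selected.

```python
def greedy_activity_selection(activities):
    """activities: list of (start, finish) pairs.
    Returns a maximum-size subset of mutually compatible activities."""
    selected = []
    last_finish = float("-inf")
    # Greedy choice: always consider the remaining activity that finishes earliest.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example (hypothetical data):
# greedy_activity_selection([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)])
# -> [(1, 4), (5, 7), (8, 9)]
```

Sorting dominates the running time, so the whole procedure runs in $O(n \log n)$, compared with the polynomial but much larger cost of the dynamic-programming formulation.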