Published on: Apr 22, 2026

Top 10 Must-Know Algorithms for Beginners

Top algorithms to master programming fundamentals quickly

6 Minute Read

Palash Somani

Principal Engineer, Dream11


Top 10 Must-Know Algorithms Every Beginner Should Learn


Every senior engineer you look up to started with the same foundation: a small set of algorithms that taught them how to think. Not just how to write code, but how to design efficient solutions, predict performance, and reason through problems before touching a keyboard.

These 10 algorithms are that foundation.

Binary Search teaches you to think logarithmically. Bubble Sort teaches you nested loops and swapping. Merge Sort teaches divide and conquer. Quick Sort teaches partitioning. BFS and DFS teach you graph traversal. Dijkstra’s teaches you greedy strategy on weighted graphs.

Coding interviews at companies like Google, Amazon, and Microsoft routinely test these exact algorithms. Competitive programming problems build on these building blocks. The systems you will design at scale draw on these core ideas.

This blog covers all 10: what each algorithm does, how it works step by step, a clean Python implementation, and the complexity you need to know for interviews.

What is an Algorithm?


An algorithm is a step-by-step set of instructions for solving a problem or performing a computation. It takes an input, processes it, and produces an output.

Input → [Algorithm: defined steps] → Output

Example:
Input:  Unsorted list [5, 2, 8, 1, 9]
Algorithm: Compare adjacent elements, swap if out of order, repeat
Output: Sorted list [1, 2, 5, 8, 9]

A good algorithm is correct (produces the right answer), efficient (uses minimal time and memory), and clear (understandable and maintainable).

How to Judge an Algorithm

Every algorithm is evaluated on two dimensions:

TIME COMPLEXITY: How does execution time grow as input size grows?
  O(1)       Constant    → Same speed for any input size (best)
  O(log n)   Logarithmic → Doubles input, adds 1 step
  O(n)       Linear      → Doubles input, doubles steps
  O(n log n) Linearithmic→ Efficient sorting algorithms
  O(n²)      Quadratic   → Doubles input, quadruples steps
  O(2ⁿ)      Exponential → Add one element, steps double (worst)

SPACE COMPLEXITY: How much extra memory is needed?
  O(1)  Constant   → No extra memory proportional to input
  O(n)  Linear     → Memory grows with input size
  O(log n) Log     → Recursion depth for divide-and-conquer

When an interviewer says “can you do better?”, they are asking you to reduce time or space complexity.
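
To make these growth rates concrete, here is a small illustrative sketch (the helper functions are mine, not standard) that counts worst-case comparisons for a linear scan versus a halving search as the input size doubles:

def linear_steps(n):
    return n                  # worst case: check every element

def binary_steps(n):
    steps = 0
    while n > 1:
        n //= 2               # halve the remaining search space
        steps += 1
    return steps + 1          # +1 for the final comparison

for n in [1_000, 2_000, 4_000]:
    print(n, linear_steps(n), binary_steps(n))
# 1000 1000 10
# 2000 2000 11
# 4000 4000 12

Doubling the input doubles the linear count but adds only one step to the logarithmic count, which is exactly the O(n) vs O(log n) pattern from the table above.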

Algorithm 1: Binary Search


What It Is

Binary Search finds a target value in a sorted array by repeatedly halving the search space. Instead of checking every element (O(n)), it eliminates half the remaining elements each step (O(log n)).

How It Works

Array: [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
Target: 23

Step 1: low=0, high=9, mid=4 → arr[4]=16 < 23 → search RIGHT half
Step 2: low=5, high=9, mid=7 → arr[7]=56 > 23 → search LEFT half
Step 3: low=5, high=6, mid=5 → arr[5]=23 == 23 → FOUND at index 5

Without Binary Search: could check up to 10 elements
With Binary Search:    only 3 comparisons needed

Code

def binary_search(arr, target):
    low, high = 0, len(arr) - 1

    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1      # Target in right half
        else:
            high = mid - 1     # Target in left half

    return -1  # Not found

arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search(arr, 23))   # 5
print(binary_search(arr, 10))   # -1

Complexity: Time O(log n) | Space O(1)
Requirement: Array must be sorted.
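
In everyday Python you rarely hand-write this loop: the standard library's bisect module performs the same O(log n) search on a sorted list. A quick equivalent using bisect_left:

from bisect import bisect_left

def binary_search_bisect(arr, target):
    i = bisect_left(arr, target)   # leftmost insertion point
    return i if i < len(arr) and arr[i] == target else -1

arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search_bisect(arr, 23))   # 5
print(binary_search_bisect(arr, 10))   # -1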

Algorithm 2: Bubble Sort


What It Is

Bubble Sort repeatedly compares adjacent elements and swaps them if they are in the wrong order. After each pass, the largest unsorted element “bubbles up” to its correct position at the end.

How It Works

Array: [64, 34, 25, 12, 22]

Pass 1: [34, 25, 12, 22, 64]  ← 64 bubbles to end
Pass 2: [25, 12, 22, 34, 64]  ← 34 bubbles to position
Pass 3: [12, 22, 25, 34, 64]  ← 25 bubbles to position
Pass 4: [12, 22, 25, 34, 64]  ← already sorted

Code

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:    # Optimization: stop if already sorted
            break
    return arr

print(bubble_sort([64, 34, 25, 12, 22]))  # [12, 22, 25, 34, 64]

Complexity: Time O(n²) worst/average, O(n) best (already sorted) | Space O(1)
Best for: Teaching sorting concepts. Not used in production for large data.

Algorithm 3: Selection Sort


What It Is

Selection Sort divides the array into a sorted part and an unsorted part. In each pass, it finds the minimum element from the unsorted part and places it at the beginning.

How It Works

Array: [64, 25, 12, 22, 11]

Pass 1: min=11 → swap with index 0 → [11, 25, 12, 22, 64]
Pass 2: min=12 → swap with index 1 → [11, 12, 25, 22, 64]
Pass 3: min=22 → swap with index 2 → [11, 12, 22, 25, 64]
Pass 4: min=25 → already in place  → [11, 12, 22, 25, 64]

Code

def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]

Complexity: Time O(n²) always | Space O(1)
Advantage over Bubble Sort: Fewer swaps (at most n-1 swaps total).

Algorithm 4: Insertion Sort


What It Is

Insertion Sort builds a sorted array one element at a time. It picks each element and inserts it into its correct position among the already-sorted elements, shifting larger elements right to make room.

How It Works

Array: [12, 11, 13, 5, 6]

Step 1: [11, 12, 13, 5, 6]  ← 11 inserted before 12
Step 2: [11, 12, 13, 5, 6]  ← 13 already in place
Step 3: [5, 11, 12, 13, 6]  ← 5 inserted at beginning
Step 4: [5, 6, 11, 12, 13]  ← 6 inserted after 5

Code

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j   = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]    # Shift right
            j -= 1
        arr[j + 1] = key           # Insert in correct position
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]

Complexity: Time O(n²) worst, O(n) best (already sorted) | Space O(1)
Best for: Small datasets and nearly-sorted arrays. Used in practice for small n.

Algorithm 5: Merge Sort


What It Is

Merge Sort uses divide and conquer: it splits the array in half recursively until each subarray has one element, then merges the sorted halves back together. It guarantees O(n log n) in all cases.

How It Works

Array: [38, 27, 43, 3]

Split:  [38, 27] | [43, 3]
Split:  [38] [27] | [43] [3]
Merge:  [27, 38]    [3, 43]
Merge:  [3, 27, 38, 43]

Code

def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid   = len(arr) // 2
    left  = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    return merge(left, right)

def merge(left, right):
    result = []
    i = j  = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82]))  # [3, 9, 27, 38, 43, 82]

Complexity: Time O(n log n) always | Space O(n) for the merge step
Best for: Large datasets where guaranteed performance matters.
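
The merge step is useful on its own: combining two already-sorted lists takes only O(n), with no full re-sort needed. Python's standard library exposes exactly this operation as heapq.merge:

from heapq import merge   # lazily merges any number of sorted inputs

print(list(merge([3, 27, 38], [9, 43, 82])))
# [3, 9, 27, 38, 43, 82]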

Algorithm 6: Quick Sort


What It Is

Quick Sort picks a pivot element and partitions the array so all elements smaller than the pivot go left and all larger elements go right. It then recursively sorts both sides. The average case is O(n log n), and the classic version sorts in place; the simpler version shown below uses extra memory for clarity.

How It Works

Array: [10, 7, 8, 9, 1, 5], pivot = last element (5)

Partition step:
  Elements < 5:  [1]
  Pivot:         [5]
  Elements > 5:  [10, 7, 8, 9]

After partition: [1, 5, 10, 7, 8, 9]
                       ↑ pivot in final position

Recursively sort [1] and [10, 7, 8, 9]
Final: [1, 5, 7, 8, 9, 10]

Code

def quick_sort(arr):
    if len(arr) <= 1:
        return arr

    pivot  = arr[len(arr) // 2]
    left   = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right  = [x for x in arr if x > pivot]

    return quick_sort(left) + middle + quick_sort(right)

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]

Complexity: Time O(n log n) average, O(n²) worst (consistently bad pivot choices, e.g. always taking the first or last element of an already sorted array) | Space O(log n) recursion depth for the in-place version
Note: The list-comprehension version above trades O(n) extra space for readability.
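
A minimal in-place sketch using the common Lomuto partition scheme (last element as pivot, matching the walkthrough above; function names are illustrative) could look like this:

def quick_sort_inplace(arr, low=0, high=None):
    """In-place Quick Sort using Lomuto partitioning (sketch)."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort_inplace(arr, low, p - 1)    # sort left of pivot
        quick_sort_inplace(arr, p + 1, high)   # sort right of pivot
    return arr

def partition(arr, low, high):
    pivot = arr[high]                  # last element as pivot
    i = low - 1                        # boundary of the "< pivot" region
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # pivot to final spot
    return i + 1

print(quick_sort_inplace([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]

Picking a random index instead of the last element is a one-line change that makes the O(n²) worst case far less likely on sorted inputs.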

Algorithm 7: Linear Search


What It Is

Linear Search checks every element one by one until the target is found or the array ends. No sorting required. Simple but slow for large arrays.

How It Works

Array: [5, 3, 8, 1, 9], target = 8

Check index 0: 5 ≠ 8
Check index 1: 3 ≠ 8
Check index 2: 8 = 8 → FOUND at index 2

Code

def linear_search(arr, target):
    for i, val in enumerate(arr):
        if val == target:
            return i
    return -1

print(linear_search([5, 3, 8, 1, 9], 8))   # 2
print(linear_search([5, 3, 8, 1, 9], 10))  # -1

Complexity: Time O(n) | Space O(1)
Use when: Array is unsorted or too small to benefit from Binary Search.
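
This is also what Python's built-in membership tools do: both the in operator and list.index perform a linear scan.

arr = [5, 3, 8, 1, 9]
print(8 in arr)        # True  (linear scan under the hood)
print(arr.index(8))    # 2     (raises ValueError if absent)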

Algorithm 8: Depth-First Search (DFS)


What It Is

DFS explores a graph by going as deep as possible along each branch before backtracking. It uses a stack (or recursion) and a visited set to avoid cycles.

How It Works

Graph:  0 - 1 - 3
         \
          2 - 4

DFS from 0:
  Visit 0 → go deep to 1 → go deep to 3 (dead end)
  Backtrack to 1 → no more neighbours
  Backtrack to 0 → visit 2 → visit 4 (dead end)

Output: [0, 1, 3, 2, 4]

Code

def dfs(graph, start, visited=None):
    if visited is None: visited = set()
    visited.add(start)
    result = [start]
    for neighbour in graph[start]:
        if neighbour not in visited:
            result += dfs(graph, neighbour, visited)
    return result

graph = {0:[1,2], 1:[0,3], 2:[0,4], 3:[1], 4:[2]}
print(dfs(graph, 0))   # [0, 1, 3, 2, 4]

Complexity: Time O(V + E) | Space O(V)
Best for: Cycle detection, topological sort, connected components.
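
Because DFS is driven by a stack, the recursion above can also be replaced with an explicit stack, which avoids Python's recursion limit on deep graphs. A minimal iterative sketch (the function name is mine):

def dfs_iterative(graph, start):
    visited, stack, result = set(), [start], []
    while stack:
        node = stack.pop()               # LIFO: take the deepest node
        if node in visited:
            continue
        visited.add(node)
        result.append(node)
        # Push neighbours in reverse so they pop in original order
        for neighbour in reversed(graph[node]):
            if neighbour not in visited:
                stack.append(neighbour)
    return result

graph = {0:[1,2], 1:[0,3], 2:[0,4], 3:[1], 4:[2]}
print(dfs_iterative(graph, 0))   # [0, 1, 3, 2, 4]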

Algorithm 9: Breadth-First Search (BFS)


What It Is

BFS explores a graph level by level using a queue, visiting all nodes at distance 1 before distance 2. It guarantees the shortest path in unweighted graphs.

How It Works

Graph:  0 - 1 - 3
         \
          2 - 4

BFS from 0:
  Level 0: [0]        Visit 0
  Level 1: [1, 2]     Visit all neighbours of 0
  Level 2: [3, 4]     Visit all neighbours of 1 and 2

Output: [0, 1, 2, 3, 4]

Code

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue   = deque([start])
    result  = []

    while queue:
        node = queue.popleft()
        result.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return result

graph = {0:[1,2], 1:[0,3], 2:[0,4], 3:[1], 4:[2]}
print(bfs(graph, 0))   # [0, 1, 2, 3, 4]

Complexity: Time O(V + E) | Space O(V)
Best for: Shortest path (unweighted), level-order traversal, connected components.
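
Because the queue hands out nodes in order of hop count, recording the level at which each node is first discovered gives shortest distances for free. A small variant of the function above (the name is illustrative):

from collections import deque

def bfs_distances(graph, start):
    dist  = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:          # first visit = shortest
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

graph = {0:[1,2], 1:[0,3], 2:[0,4], 3:[1], 4:[2]}
print(bfs_distances(graph, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}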

Algorithm 10: Dijkstra’s Shortest Path


What It Is

Dijkstra’s algorithm finds the shortest path from a source to all other nodes in a weighted graph with non-negative edge weights. It uses a priority queue (min heap) to always process the closest unvisited node first.

How It Works

Graph (weighted):
  0 --4-- 1 --1-- 3
  |       |
  2       5
  |       |
  2 ------+
  (edges: 0-1 weight 4, 0-2 weight 2, 2-1 weight 5, 1-3 weight 1)

Dijkstra from node 0:
  Start: dist = [0, ∞, ∞, ∞]
  Process 0: update neighbours 1 (dist=4), 2 (dist=2)
  Process 2: update 1 (dist=2+5=7→no, keep 4)
  Process 1: update 3 (dist=4+1=5)
  Final distances from 0: {0:0, 1:4, 2:2, 3:5}

Code

import heapq

def dijkstra(graph, start):
    """
    graph: {node: [(neighbour, weight), ...]}
    Returns shortest distances from start to all nodes.
    """
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    heap = [(0, start)]   # (distance, node)

    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]: continue   # Skip outdated entries

        for neighbour, weight in graph[node]:
            new_dist = dist[node] + weight
            if new_dist < dist[neighbour]:
                dist[neighbour] = new_dist
                heapq.heappush(heap, (new_dist, neighbour))

    return dist

graph = {
    0: [(1, 4), (2, 2)],
    1: [(3, 1)],
    2: [(1, 5)],
    3: []
}
print(dijkstra(graph, 0))  # {0: 0, 1: 4, 2: 2, 3: 5}

Complexity: Time O((V + E) log V) with min heap | Space O(V)
Requirement: Non-negative edge weights only.
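
The function above returns distances but not the routes themselves. A common extension (sketched here with illustrative names) records each node's predecessor so the actual path can be rebuilt by walking backwards from the target:

import heapq

def dijkstra_with_path(graph, start):
    dist = {node: float('inf') for node in graph}
    prev = {node: None for node in graph}   # predecessor on best path
    dist[start] = 0
    heap = [(0, start)]

    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                         # skip outdated entries
        for neighbour, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbour]:
                dist[neighbour] = new_dist
                prev[neighbour] = node
                heapq.heappush(heap, (new_dist, neighbour))
    return dist, prev

def rebuild_path(prev, target):
    path = []
    while target is not None:
        path.append(target)
        target = prev[target]
    return path[::-1]

graph = {0: [(1, 4), (2, 2)], 1: [(3, 1)], 2: [(1, 5)], 3: []}
dist, prev = dijkstra_with_path(graph, 0)
print(rebuild_path(prev, 3))   # [0, 1, 3]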

Algorithm Complexity Cheat Sheet

Algorithm       | Best Time      | Average Time   | Worst Time     | Space    | Stable?
----------------+----------------+----------------+----------------+----------+--------
Binary Search   | O(1)           | O(log n)       | O(log n)       | O(1)     | N/A
Linear Search   | O(1)           | O(n)           | O(n)           | O(1)     | N/A
Bubble Sort     | O(n)           | O(n²)          | O(n²)          | O(1)     | Yes
Selection Sort  | O(n²)          | O(n²)          | O(n²)          | O(1)     | No
Insertion Sort  | O(n)           | O(n²)          | O(n²)          | O(1)     | Yes
Merge Sort      | O(n log n)     | O(n log n)     | O(n log n)     | O(n)     | Yes
Quick Sort      | O(n log n)     | O(n log n)     | O(n²)          | O(log n) | No
DFS             | O(V+E)         | O(V+E)         | O(V+E)         | O(V)     | N/A
BFS             | O(V+E)         | O(V+E)         | O(V+E)         | O(V)     | N/A
Dijkstra        | O((V+E) log V) | O((V+E) log V) | O((V+E) log V) | O(V)     | N/A

Which Algorithm Should You Learn First?

Week 1: Binary Search + Linear Search
  Why: Teaches you O(log n) vs O(n) thinking immediately.
  Builds: Foundation for every search problem.

Week 2: Bubble Sort + Insertion Sort
  Why: Teaches nested loops, swapping, and O(n²) patterns.
  Builds: Core understanding of sorting mechanics.

Week 3: Merge Sort + Quick Sort
  Why: Teaches divide and conquer, recursion, O(n log n).
  Builds: Most important sorting knowledge for interviews.

Week 4: DFS + BFS
  Why: Teaches graph traversal, the foundation of all graph problems.
  Builds: Skills needed for trees, graphs, paths, and components.

Week 5: Dijkstra's Algorithm
  Why: Teaches weighted shortest path, greedy strategy, priority queues.
  Builds: Ability to solve GPS, routing, and network problems.

After these 10, learn: Dynamic Programming, Binary Trees,
Hash Tables, and Greedy Algorithms.

Conclusion

These 10 algorithms are the foundation of everything you will build as a programmer. Here is the full recap:

  1. Searching: Binary Search (O(log n), sorted arrays) and Linear Search (O(n), any array). Always prefer Binary Search for sorted data.

  2. Sorting (simple): Bubble Sort, Selection Sort, and Insertion Sort. All O(n²) worst case. Learn them to understand sorting mechanics. Use Insertion Sort for small or nearly-sorted arrays.

  3. Sorting (efficient): Merge Sort (O(n log n) always, stable, extra space) and Quick Sort (O(n log n) average, in-place, fastest in practice).

  4. Graph Traversal: DFS (stack, deep-first, cycle detection, topological sort) and BFS (queue, level-first, shortest path in unweighted graphs). Both O(V + E).

  5. Weighted Shortest Path: Dijkstra’s Algorithm (priority queue, O((V+E) log V), non-negative weights only).

The single most important skill these algorithms build is not memorising the code. It is learning to think about time and space complexity before writing a single line.

Frequently Asked Questions

Q: Which algorithms come up most often in coding interviews?
A: Binary Search, BFS, DFS, and Merge Sort come up most frequently in interviews at product-based companies. Binary Search appears in many non-obvious problems (find first/last position, minimum in rotated array). BFS and DFS appear in almost every graph and tree problem. Dijkstra’s is tested less frequently but is important for system design and weighted graph questions.

Q: What is the difference between BFS and DFS?
A: BFS uses a queue (FIFO) and explores level by level. It guarantees the shortest path in unweighted graphs. DFS uses a stack or recursion and goes as deep as possible before backtracking. DFS is better for cycle detection, topological sort, and exhaustive path exploration. BFS is better for shortest path and level-order processing.

Q: When should I use Merge Sort and when should I use Quick Sort?
A: Use Merge Sort when you need guaranteed O(n log n) performance regardless of input, when sorting linked lists, or when stability (preserving order of equal elements) matters. Use Quick Sort when sorting arrays where average-case performance matters and you want in-place sorting with minimal memory usage. Quick Sort is generally faster in practice despite the same Big O notation.

Q: Does Binary Search work on unsorted arrays?
A: No. Binary Search requires a sorted array because it relies on the fact that eliminating one half is safe: if the target is greater than the middle element, it cannot be in the left half. If the array is unsorted, this assumption breaks and Binary Search gives wrong answers. Use Linear Search for unsorted arrays.

Q: How is Dijkstra’s algorithm different from BFS?
A: BFS finds the shortest path by hop count (number of edges) in unweighted graphs, treating all edges as equal. Dijkstra’s finds the shortest path by total weight in weighted graphs where edges have different costs. BFS uses a regular queue. Dijkstra’s uses a priority queue (min heap) to always process the lowest-cost node next. If all edge weights are equal (weight = 1), Dijkstra’s reduces to BFS.

Q: Why is Quick Sort O(n²) in the worst case?
A: Quick Sort’s worst case occurs when the pivot consistently divides the array into one empty part and one part of size n-1. This happens when the array is already sorted and you always pick the first or last element as the pivot. Each partition step does O(n) work, and with n partitions, total work is O(n²). Choosing a random pivot or using the median-of-three strategy largely avoids this.

Q: What is a stable sort, and why does it matter?
A: A stable sort preserves the relative order of equal elements. If two elements have the same value, a stable sort guarantees the one that appeared first in the input will appear first in the output. Merge Sort and Insertion Sort are stable. Quick Sort and Selection Sort are not stable by default. Stability matters when sorting objects by multiple fields (e.g., sort by name, then by age).
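
Python's built-in sorted (Timsort) is stable, which makes the effect easy to see for yourself:

people = [("bob", 25), ("ann", 30), ("cara", 25)]
# Sorted by age only: the two 25s keep their original input order
print(sorted(people, key=lambda p: p[1]))
# [('bob', 25), ('cara', 25), ('ann', 30)]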

Q: How do I pick the right algorithm for a problem?
A: For searching: if sorted, use Binary Search (O(log n)). If unsorted, use Linear Search (O(n)). For sorting: if n is small, use Insertion Sort. If n is large and stability matters, use Merge Sort. If n is large and you want in-place, use Quick Sort. For graphs: if you need shortest path in unweighted graphs, use BFS. For connectivity and cycle detection, use DFS. For shortest path in weighted graphs, use Dijkstra’s.