Master Selection Sort - O(n²) Algorithm

Learn selection sort with step-by-step visualizations. Watch minimum element selection and efficient swapping in real-time. Understand minimal swap optimization and complexity analysis.

O(n²) All Cases
O(n) Swaps Max
Not Stable
🔍
Find Minimum
📊
Step-by-Step
Minimal Swaps
📚
Educational
Powered by orbit2x.com

Selection Sort Visualization

Live
Pass: 0 / 0
Comparisons: 0
Swaps: 0

Enter array values to visualize

Use sample data or custom input

Unsorted
Comparing
Current Min
Swapping
Sorted

Array Input & Controls

Default: Ascending

Code Implementation

def selection_sort(arr):
    """
    Selection Sort - finds minimum and swaps to correct position
    Time: O(n²) all cases - no optimization possible
    Space: O(1) - in-place sorting
    """
    n = len(arr)

    for i in range(n - 1):
        # Find minimum element in unsorted region
        min_idx = i

        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j

        # Swap minimum with first unsorted element
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]

    return arr

# Usage examples
arr1 = [64, 25, 12, 22, 11]
print(selection_sort(arr1))  # [11, 12, 22, 25, 64]

# Already sorted - still O(n²)
arr2 = [1, 2, 3, 4, 5]
print(selection_sort(arr2))  # No optimization - always n(n-1)/2 comparisons

Big O Complexity

Time O(n²)

Always quadratic - no optimization

Space O(1)

In-place sorting - constant space

Current Action
Ready to sort

Complexity Cases

Best Case O(n²)

No early exit - always n(n-1)/2 comparisons

Average Case O(n²)

Same as best - not adaptive to input

Worst Case O(n²)

But only O(n) swaps - minimal swapping!

📋
Properties

Stable: No - may change equal order

In-place: Yes - O(1) extra space

Adaptive: No - ignores input order

Did You Know?

Finds minimum element and places it in correct position each pass

Makes at most n-1 swaps - useful when swapping is expensive

Sorted region grows from left side (unlike bubble sort)

Cannot detect if array is already sorted - no optimization possible

When to Use Selection Sort

Selection sort excels when minimizing swaps is critical, even though comparisons remain O(n²).

🔄

Expensive Swaps

When swapping is costly (large objects, database records, flash memory), selection sort's O(n) maximum swaps outperform bubble sort's O(n²).

At most n-1 swaps
Database sorting
📚

Learning Tool

Excellent for teaching sorting concepts. "Find minimum and place it" is intuitive and demonstrates selection-based algorithms clearly.

Easy to understand
Visual demonstration
📊

Tiny Datasets

Performs acceptably on very small arrays (n < 10). Simple implementation with minimal overhead makes it practical for embedded systems.

Low memory footprint
Simple code
💾

Flash Memory

Flash memory has limited write cycles. Minimizing swaps (writes) extends hardware lifespan. Selection sort's O(n) swaps are ideal.

Minimal writes
Hardware longevity
🔧

Embedded Systems

O(1) space complexity makes it perfect for microcontrollers and IoT devices with extremely limited RAM availability.

In-place sorting
No extra memory
⚙️

FPGA Implementation

Simple comparison logic maps well to hardware circuits. Parallel minimum-finding stages can speed up execution in custom chips.

Simple circuits
Parallelizable

When NOT to Use Selection Sort

Large datasets: Always O(n²) comparisons. Use QuickSort O(n log n) or MergeSort instead.

Nearly sorted data: No optimization possible. Insertion sort or bubble sort perform better.

Stability required: Not stable - may reorder equal elements. Use insertion or merge sort.

Performance critical: Cannot detect sorted arrays. Use adaptive algorithms like Timsort.

Selection Sort vs Bubble Sort

S

Selection Sort

✓ Swaps: O(n) maximum - minimal!

✗ Comparisons: Always O(n²)

✗ Adaptive: No - ignores input

✗ Stable: No

Best when swapping is expensive

B

Bubble Sort

✗ Swaps: O(n²) worst case

✓ Comparisons: O(n) best with optimization

✓ Adaptive: Yes - early exit

✓ Stable: Yes

Best when data is nearly sorted

Free Selection Sort Algorithm Visualizer: Learn Sorting Step-by-Step Online

Master selection sort algorithm with interactive visualization, step-by-step animation, and real-time complexity analysis. Perfect for computer science students, coding interview preparation, and understanding O(n²) sorting algorithms with minimal swaps.

What Is Selection Sort Algorithm (And Why Learn It)?

Selection sort is an in-place comparison sorting algorithm that divides the array into sorted and unsorted regions, repeatedly selecting the minimum element from the unsorted portion and moving it to the end of the sorted portion. Known for performing a minimal number of swaps (at most n-1), it is a staple of introductory data structures courses; see the GeeksforGeeks Selection Sort Guide for additional background.

While selection sort has O(n²) time complexity like bubble sort, it performs significantly fewer swaps: at most n-1 in all cases. This makes it efficient for systems where write operations are expensive (like flash memory or EEPROM). Understanding selection sort builds a foundation for more advanced selection-based algorithms like quickselect and heap sort.

Why Selection Sort Is Essential for Learning:

Memory-Efficient Sorting
  • Minimal swaps: At most n-1 swaps regardless of input (vs O(n²) for bubble sort)
  • In-place sorting: O(1) space complexity with no auxiliary arrays
  • Write-efficient: Ideal for flash memory where writes are costly
  • Simple logic: Easy to understand and implement in any programming language
Algorithm Analysis Skills
  • Consistent performance: Always O(n²) comparisons regardless of input order
  • Not adaptive: Performance doesn't improve on partially sorted data
  • Not stable: Relative order of equal elements may change
  • Interview common: Frequently asked in coding assessments and whiteboard rounds

Real Selection Sort Example

Input Array: [64, 25, 12, 22, 11] Unsorted array with 5 elements
Sorted Array: [11, 12, 22, 25, 64] After 4 passes: 10 comparisons, 3 swaps (the final pass finds the minimum already in place)
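The pass-by-pass behavior above can be reproduced with a small instrumented trace (a hedged sketch; `selection_sort_trace` and its counters are illustrative names, not part of the visualizer):

```python
# Instrumented trace of selection sort on the example input above.
# Prints the array after each pass and counts comparisons and swaps.
def selection_sort_trace(arr):
    arr = list(arr)                       # work on a copy
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                  # swap only when needed
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
        print(f"Pass {i + 1}: {arr}")
    return arr, comparisons, swaps

result, comparisons, swaps = selection_sort_trace([64, 25, 12, 22, 11])
# result == [11, 12, 22, 25, 64]; 10 comparisons; 3 swaps
# (the final pass finds 25 already in place, so no swap occurs)
```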

How to Use the Selection Sort Visualizer in 3 Steps

1
Enter your array: Type comma-separated numbers into the input field (e.g., 64, 25, 12, 22, 11) or click quick examples for best case, worst case, or random arrays. Our visualizer supports up to 20 elements for optimal animation clarity and step-by-step learning.
2
Choose visualization mode: Click "Sort Array" for full automatic animation, or use "Next Step" for step-by-step control to understand each minimum-finding operation and swap. Adjust animation speed from slow (500ms) to fast (100ms) for detailed observation or quick demonstration.
3
Watch and learn: See color-coded bars change—blue for current minimum, yellow for comparing, red for swapping, green for sorted. Track real-time statistics: comparisons, swaps, passes, and current complexity. Compare with our bubble sort visualizer to understand trade-offs.

💡 Pro Tip: Memory-Constrained Systems

Selection sort is preferred over bubble sort in systems where memory writes are expensive (flash storage, EEPROM). With at most O(n) swaps vs O(n²) for bubble sort, it can cut write operations by over 90% on typical inputs. Mention this optimization in interviews when discussing real-world algorithm selection for embedded systems.

How Selection Sort Algorithm Works (Step-by-Step)

1
Initialize Sorted Boundary:

Start with an empty sorted region at the beginning of the array. The outer loop runs n-1 times (where n = array length), with each iteration expanding the sorted region by one element. The boundary between sorted and unsorted regions moves rightward after each pass.

2
Find Minimum Element:

Scan through the unsorted region to find the minimum element. Start by assuming the first unsorted element is the minimum, then compare it with each remaining element, updating the minimum index whenever a smaller element is found. This linear search takes n - i comparisons on pass i (counting passes from 1). Our visualizer highlights the current minimum in blue.

3
Swap to Sorted Position:

After finding the minimum, swap it with the first element of the unsorted region (current position). This moves the minimum to its final sorted position. Unlike bubble sort which swaps continuously, selection sort performs exactly one swap per pass. Watch red-colored bars in our visualizer to see swap operations.

4
Move Boundary and Repeat:

Move the sorted/unsorted boundary one position right, reducing the unsorted region size by 1. Repeat the minimum-finding and swapping process for the remaining unsorted elements. After n-1 iterations, the entire array is sorted. The algorithm is non-adaptive—it always performs n(n-1)/2 comparisons regardless of input order (sorted, reverse, or random).

5
Final Sorted Array:

After n-1 passes, all elements are in ascending order. The last element is automatically in correct position (no need for nth pass). Sorted elements turn green in our visualizer. Note that selection sort is not stable—equal elements may not maintain their original relative order, making it unsuitable for multi-key sorting.

Selection Sort Time & Space Complexity Analysis

Case         | Time Complexity | Comparisons | Swaps    | When It Occurs
Best Case    | O(n²)           | n(n-1)/2    | 0 to n-1 | Array already sorted (still scans all)
Average Case | O(n²)           | n(n-1)/2    | ≤ n-1    | Random/partially sorted array
Worst Case   | O(n²)           | n(n-1)/2    | ≤ n-1    | Array sorted in reverse order

Space: O(1) constant extra space, in-place sorting (no auxiliary array)

Performance Example: 100 Elements

Best Case (Sorted): 4,950 comparisons, 0 swaps, ~10ms execution
Average Case (Random): 4,950 comparisons, at most 99 swaps, ~12ms execution
Worst Case (Reverse): 4,950 comparisons, at most 99 swaps, ~12ms execution

(Execution times are illustrative and vary by machine and language.)
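The comparison counts above can be checked directly. This sketch (the helper `count_ops` is an illustrative name, not part of the visualizer) counts operations for n = 100 on sorted, random, and reverse inputs; timings are omitted because they vary by machine:

```python
# Verify that selection sort always performs n(n-1)/2 comparisons,
# while the actual swap count depends on the input (but never exceeds n-1).
import random

def count_ops(arr):
    arr = list(arr)
    comparisons = swaps = 0
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return comparisons, swaps

n = 100
for name, data in [("sorted", list(range(n))),
                   ("random", random.sample(range(n), n)),
                   ("reverse", list(range(n, 0, -1)))]:
    c, s = count_ops(data)
    print(f"{name}: {c} comparisons, {s} swaps")
    # comparisons is always n*(n-1)//2 == 4950; swaps is at most n-1
```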

Selection Sort vs Bubble Sort Performance

For 100 random elements, bubble sort performs ~2,475 swaps while selection sort performs at most 99, a roughly 96% reduction in write operations! This makes selection sort substantially faster than bubble sort on systems with expensive write operations (flash memory, databases, networked storage).
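As a hedged check of these figures, the sketch below counts actual swaps performed by both algorithms on the same shuffled input (`bubble_swaps` and `selection_swaps` are illustrative helpers; exact bubble counts vary per shuffle, with ~2,475 being the expected value):

```python
# Count real swaps (writes) for bubble sort vs selection sort on the
# same 100-element shuffled input.
import random

def bubble_swaps(arr):
    arr = list(arr)
    swaps = 0
    for i in range(len(arr) - 1):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swaps += 1
    return swaps

def selection_swaps(arr):
    arr = list(arr)
    swaps = 0
    for i in range(len(arr) - 1):
        min_idx = min(range(i, len(arr)), key=arr.__getitem__)
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return swaps

data = random.sample(range(1000), 100)
print(bubble_swaps(data), selection_swaps(data))
# bubble: roughly 2,475 on average; selection: never more than 99
```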

When to Use Selection Sort (And When NOT To)

✓ Good Use Cases

  • Flash memory systems: Minimize write operations for NAND/NOR flash
  • EEPROM storage: Embedded systems with limited write cycles
  • Small datasets: Arrays with n < 50 where simplicity matters
  • Memory constraints: O(1) space requirement for tiny systems
  • Educational purposes: Teaching sorting and algorithm analysis basics
  • Auxiliary data writes: When swap operation involves complex/costly writes

✗ Bad Use Cases

  • Large datasets: n > 100 elements—use O(n log n) algorithms instead
  • Nearly sorted data: No adaptive advantage (always O(n²))
  • Stable sort needed: Equal elements may change relative order
  • Real-time systems: Consistent O(n²) performance unacceptable
  • Production systems: Use language built-ins (quicksort/mergesort/timsort)
  • Big data: Millions of records require O(n log n) minimum

Better Alternatives for Different Scenarios:

Insertion Sort

O(n) for nearly sorted—adaptive and stable

Quick Sort

O(n log n) average—fastest general-purpose

Heap Sort

O(n log n) guaranteed + O(1) space

Explore more sorting algorithms in our interactive algorithms collection.

Selection Sort vs Bubble Sort vs Insertion Sort

All three are O(n²) comparison sorts, but with different strengths and trade-offs:

Feature     | Selection Sort | Bubble Sort     | Insertion Sort
Comparisons | Always O(n²)   | O(n) to O(n²)   | O(n) to O(n²)
Swaps       | O(n) (best)    | O(n²) (worst)   | O(n²)
Adaptive    | No             | Yes (with flag) | Yes (best)
Stable      | No             | Yes             | Yes
Best Case   | O(n²)          | O(n)            | O(n)
Online      | No             | No              | Yes
Best for    | Flash memory   | Education       | Nearly sorted data

Quick Comparison Summary

  • Choose Selection Sort: When write operations are expensive (flash/EEPROM storage)
  • Choose Bubble Sort: When stability matters or for educational demonstrations
  • Choose Insertion Sort: When data is nearly sorted or arriving in streaming fashion
  • Choose None: For n > 100, use O(n log n) algorithms like quicksort or mergesort

Implementing Selection Sort in Popular Languages

Selection sort implementation follows the same pattern across languages—find minimum, swap once per pass:

🐍

Python

Python's tuple unpacking and min() with a key function make selection sort concise. The outer loop iterates over positions; the inner search finds the index of the minimum in the unsorted region. Track the minimum's index, not its value, and use tuple unpacking for a clean swap.

Key Pattern:

min_idx = min(range(i, len(arr)), key=arr.__getitem__), then swap

JavaScript/TypeScript

JavaScript uses traditional loops with let for mutability. Track minIndex with Math.min() or manual comparison. Destructuring assignment makes swapping clean. Identical in TypeScript with type annotations added for arrays.

Key Pattern:

[arr[i], arr[minIdx]] = [arr[minIdx], arr[i]]

Java

Java requires traditional three-line swap with temp variable. Strongly typed int[] arrays. Manual linear search for minimum in inner loop. Popular in Android development and enterprise systems needing write-efficient sorting.

Key Pattern:

if (arr[j] < arr[minIdx]) minIdx = j; then swap

⚙️

C++, Go, Rust

C++ uses std::swap() and iterators. Go supports tuple assignment like Python. Rust enforces memory safety with slice borrowing. All share the same O(n²) time, O(n) swap pattern—language syntax differs but algorithm identical.

Key Pattern:

std::swap(arr[i], arr[minIdx]) or equivalent

📚 Complete Code Examples & Developer Tools

Full Implementations: Visit our algorithms collection for tested selection sort code in 10+ languages including Python, JavaScript, Java, C++, Go, Rust, PHP, and Swift.

Development Tools: Test your code with our code formatter, diff checker, and regex tester.

15 Selection Sort Interview Questions (With Answers)

1. What is the time complexity of selection sort?

Answer: O(n²) for all cases (best, average, worst). Unlike bubble sort, selection sort is not adaptive—it always performs n(n-1)/2 comparisons regardless of input order. Space complexity is O(1) since it sorts in-place. The consistent O(n²) performance makes it predictable but slow for large datasets.

2. Why does selection sort have fewer swaps than bubble sort?

Answer: Selection sort performs at most n-1 swaps (at most one per pass), while bubble sort performs O(n²) swaps in the worst case. Selection sort finds the minimum once per pass and then swaps once; bubble sort swaps immediately whenever adjacent elements are out of order. This makes selection sort far more write-efficient on systems where write operations are costly (flash memory, databases, networked storage).

3. Is selection sort stable?

Answer: No, standard selection sort is not stable. When swapping the minimum element to the front, it can jump over equal elements and change their relative order. Example: [4a, 3, 4b, 2] becomes [2, 3, 4b, 4a]—the 4s swapped order. Stability can be achieved with linked lists or additional bookkeeping, but at the cost of complexity. Use insertion sort or merge sort if stability is required.
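The instability example above can be reproduced by sorting (key, label) pairs and comparing keys only (a hedged sketch; `selection_sort_pairs` is an illustrative name):

```python
# Demonstrate instability: equal keys (4, "a") and (4, "b") swap their
# relative order because the minimum jumps over them during the swap.
def selection_sort_pairs(pairs):
    pairs = list(pairs)
    n = len(pairs)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if pairs[j][0] < pairs[min_idx][0]:   # compare keys only
                min_idx = j
        if min_idx != i:
            pairs[i], pairs[min_idx] = pairs[min_idx], pairs[i]
    return pairs

data = [(4, "a"), (3, "x"), (4, "b"), (2, "y")]
print(selection_sort_pairs(data))
# [(2, 'y'), (3, 'x'), (4, 'b'), (4, 'a')] -- the 4s swapped order
```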

4. When would you use selection sort in production?

Answer: Use selection sort when: (1) Sorting small arrays (n < 20) on memory-constrained devices, (2) Write operations are expensive (flash memory, EEPROM with limited write cycles), (3) Simplicity is critical for embedded systems, (4) Auxiliary memory is unavailable. For most production scenarios, use O(n log n) algorithms like quicksort, mergesort, or language built-ins (Python's Timsort, Java's dual-pivot quicksort).

5. How many comparisons does selection sort make?

Answer: Exactly n(n-1)/2 comparisons for all inputs. Pass 1: n-1 comparisons, pass 2: n-2, ..., pass n-1: 1 comparison. Sum = (n-1) + (n-2) + ... + 1 = n(n-1)/2. For n=10: 45 comparisons. For n=100: 4,950 comparisons. This is identical to bubble sort's worst case, but selection sort maintains this count even on sorted inputs (non-adaptive behavior).
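A quick arithmetic check of the formula (`total_comparisons` is an illustrative helper):

```python
# The comparison count n(n-1)/2 follows from summing the pass lengths:
# (n-1) + (n-2) + ... + 1.
def total_comparisons(n):
    return n * (n - 1) // 2

assert total_comparisons(10) == 45                    # n = 10
assert total_comparisons(100) == 4950                 # n = 100
assert total_comparisons(100) == sum(range(1, 100))   # the pass-length sum
print("formula verified")
```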

6. Can you implement selection sort recursively?

Answer: Yes. Base case: an array of size 0 or 1 is sorted. Recursive case: find the minimum, swap it to the front, then recursively sort the remainder. Space complexity becomes O(n) due to the call stack (vs O(1) iterative). Pseudocode: selectionSort(arr, n): if n <= 1 return; minIdx = findMin(arr, n); swap(arr[0], arr[minIdx]); selectionSort(arr + 1, n - 1). Rarely used in practice; the iterative version is preferred.
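The pseudocode above can be adapted to Python, with the pointer arithmetic replaced by a start index (a hedged sketch; `selection_sort_recursive` is an illustrative name):

```python
# Recursive selection sort: sort arr[start:] by placing its minimum at
# `start`, then recursing on the remainder. O(n) call-stack space.
def selection_sort_recursive(arr, start=0):
    if start >= len(arr) - 1:          # base case: 0 or 1 elements left
        return arr
    min_idx = min(range(start, len(arr)), key=arr.__getitem__)
    arr[start], arr[min_idx] = arr[min_idx], arr[start]
    return selection_sort_recursive(arr, start + 1)

print(selection_sort_recursive([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```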

7. What's the difference between selection sort and heap sort?

Answer: Both repeatedly select minimum/maximum, but heap sort uses a heap data structure for O(log n) selection vs O(n) linear search. Selection sort: O(n²) total time (n passes × O(n) minimum-finding). Heap sort: O(n log n) total time (n passes × O(log n) heap operations). Both are in-place with O(1) space. Heap sort is an optimized version of selection sort. Learn heaps at our heap visualizer.

8. How do you implement selection sort in descending order?

Answer: Find the maximum element instead of minimum in each pass. Change the comparison operator from < to > when tracking the extreme value. Swap maximum to the front (or minimum to the back if you prefer). All complexity analysis remains identical—just reversed sort direction. The algorithm structure doesn't change.
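A minimal sketch of the descending variant, assuming the same structure as the ascending implementation shown earlier (`selection_sort_desc` is an illustrative name):

```python
# Descending selection sort: identical structure, only the comparison
# operator is flipped to track the maximum instead of the minimum.
def selection_sort_desc(arr):
    n = len(arr)
    for i in range(n - 1):
        max_idx = i
        for j in range(i + 1, n):
            if arr[j] > arr[max_idx]:   # flipped from < to >
                max_idx = j
        if max_idx != i:
            arr[i], arr[max_idx] = arr[max_idx], arr[i]
    return arr

print(selection_sort_desc([64, 25, 12, 22, 11]))  # [64, 25, 22, 12, 11]
```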

9. Can selection sort be optimized?

Answer: Limited optimization available: (1) Bidirectional selection sort—find both min and max per pass, place at both ends (reduces passes by half but still O(n²)), (2) Skip swap if minIdx == i (element already in place), (3) Stop early if last n elements are already sorted (rare optimization). Cannot achieve O(n) best case like bubble/insertion sort due to non-adaptive nature.
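Optimization (1) can be sketched as a bidirectional (double-ended) selection sort that places both the minimum and the maximum each pass (a hedged sketch; `bidirectional_selection_sort` is an illustrative name, and the algorithm remains O(n²) in comparisons):

```python
# Bidirectional selection sort: each pass finds both min and max in the
# unsorted middle region and places them at the two ends, halving the
# number of passes but not the asymptotic comparison count.
def bidirectional_selection_sort(arr):
    left, right = 0, len(arr) - 1
    while left < right:
        min_idx = max_idx = left
        for j in range(left, right + 1):
            if arr[j] < arr[min_idx]:
                min_idx = j
            if arr[j] > arr[max_idx]:
                max_idx = j
        arr[left], arr[min_idx] = arr[min_idx], arr[left]
        # if the maximum was sitting at `left`, the swap above moved it
        if max_idx == left:
            max_idx = min_idx
        arr[right], arr[max_idx] = arr[max_idx], arr[right]
        left += 1
        right -= 1
    return arr

print(bidirectional_selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```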

10. Why is selection sort called "selection" sort?

Answer: The algorithm "selects" the smallest (or largest) element from the unsorted portion in each pass. This selected element is then placed in its final sorted position. The name emphasizes the selection process (finding minimum) rather than the exchange process (swapping), distinguishing it from exchange-based sorts like bubble sort.

11. Can selection sort work with linked lists?

Answer: Yes, but implementation differs. Instead of swapping values, rearrange node pointers. Find minimum node, update pointers to move it to sorted region. Time complexity remains O(n²) but constant factors may be higher due to pointer manipulation. For linked lists, merge sort is typically better (O(n log n) with O(1) space). See our linked list tutorial for sorting techniques optimized for pointer-based structures.
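One way to sketch the pointer-based approach described above: repeatedly unlink the minimum node from the input list and append it to a result list, so only next pointers change, never values (`Node`, `from_list`, `to_list`, and `selection_sort_linked` are all illustrative helpers):

```python
# Selection sort on a singly linked list by pointer rearrangement.
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def selection_sort_linked(head):
    dummy = Node(None, head)           # sentinel simplifies unlinking
    sorted_head = sorted_tail = None
    while dummy.next:
        # find the predecessor of the minimum node
        prev_min, prev = dummy, dummy
        while prev.next:
            if prev.next.val < prev_min.next.val:
                prev_min = prev
            prev = prev.next
        node = prev_min.next           # unlink the minimum node
        prev_min.next = node.next
        node.next = None
        if sorted_tail is None:        # append to the result list
            sorted_head = sorted_tail = node
        else:
            sorted_tail.next = node
            sorted_tail = node
    return sorted_head

print(to_list(selection_sort_linked(from_list([64, 25, 12, 22, 11]))))
# [11, 12, 22, 25, 64]
```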

12. What is the space complexity of selection sort?

Answer: O(1) auxiliary space—selection sort is in-place. It only requires a few variables: loop counters (i, j), minimum index (minIdx), and temp swap variable—all constant space regardless of input size. No auxiliary arrays, no recursion stack (iterative version). This makes it suitable for embedded systems with strict memory constraints.

13. Is selection sort adaptive?

Answer: No, selection sort is not adaptive. It always performs n(n-1)/2 comparisons and up to n-1 swaps regardless of whether the input is sorted, reverse-sorted, or random. It cannot take advantage of existing order in the data. This contrasts with adaptive algorithms like insertion sort (O(n) on sorted data) or optimized bubble sort (early exit on sorted arrays).

14. What are selection sort applications in embedded systems?

Answer: Selection sort excels in: (1) Microcontroller sensor data sorting (10-20 readings), (2) Flash/EEPROM storage where writes wear out memory cells, (3) Real-time systems needing predictable O(n²) performance (no best/worst case variance), (4) Memory-constrained devices (Arduino, PIC) with O(1) space, (5) Safety-critical systems requiring simple, verifiable code. Modern embedded systems often use counting sort or radix sort for integer data.

15. How does selection sort compare to insertion sort?

Answer: Selection sort: O(n) swaps, O(n²) always, not adaptive, not stable, better for expensive writes. Insertion sort: O(n²) swaps worst case, O(n) best case, adaptive, stable, better for nearly-sorted data. Choose selection sort for write-limited storage; choose insertion sort for streaming data or nearly-sorted inputs. Both are O(1) space and in-place. For n > 100, neither is optimal—use quicksort or mergesort.
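The write-cost trade-off above can be made concrete by counting selection sort's swaps against insertion sort's element shifts on a reverse-sorted input (`selection_swaps` and `insertion_moves` are illustrative helpers):

```python
# Compare write counts on a reverse-sorted (worst case) 100-element input:
# selection sort swaps at most n-1 times, while insertion sort shifts
# one element per inversion.
def selection_swaps(arr):
    arr = list(arr)
    swaps = 0
    for i in range(len(arr) - 1):
        min_idx = min(range(i, len(arr)), key=arr.__getitem__)
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
            swaps += 1
    return swaps

def insertion_moves(arr):
    arr = list(arr)
    moves = 0
    for i in range(1, len(arr)):
        key, j = arr[i], i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]        # shift right: one write per move
            moves += 1
            j -= 1
        arr[j + 1] = key
    return moves

worst = list(range(100, 0, -1))
print(selection_swaps(worst), insertion_moves(worst))  # prints: 50 4950
```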

🎯 Interview Strategy

When discussing selection sort in interviews, emphasize the O(n) swap advantage and explain write-cost scenarios (flash memory). Compare with bubble sort (more swaps) and insertion sort (adaptive). Mention heap sort as the optimized O(n log n) version of selection sort. This demonstrates understanding beyond memorization—what interviewers value most in algorithm discussions.

Frequently Asked Questions About Selection Sort

How many passes does selection sort need?

Exactly n-1 passes for n elements. Each pass selects the minimum from the unsorted region and places it in the sorted region, expanding sorted area by 1. After n-1 passes, the last element is automatically in correct position. Unlike bubble sort with early exit, selection sort cannot reduce passes—always runs all n-1 iterations.

Why use selection sort over insertion sort?

Use selection sort when write operations are expensive (flash memory, EEPROM, databases with write amplification). Selection sort performs O(n) swaps vs O(n²) for insertion sort in worst case. However, insertion sort is better for nearly-sorted data (O(n) best case vs O(n²) for selection sort) and is stable. Choose based on whether write cost or adaptivity matters more for your use case.

Is selection sort in-place?

Yes, selection sort is in-place with O(1) auxiliary space. It sorts by rearranging elements within the original array using only a constant amount of extra memory (loop counters, minimum index, temp swap variable). No auxiliary arrays or significant extra storage required, making it suitable for memory-constrained environments like embedded systems and microcontrollers.

Can selection sort handle duplicate elements?

Yes, selection sort correctly handles duplicates and sorts them into grouped positions. However, it may change the relative order of equal elements (not stable). For example, if you have multiple records with the same sort key, their original order may not be preserved. If stability is required for multi-key sorting, use stable algorithms like insertion sort, merge sort, or Timsort.

What is the best case input for selection sort?

Selection sort has no "best case" in terms of time complexity: it is always O(n²) regardless of input order. Whether the array is sorted, reverse-sorted, or random, it performs the same n(n-1)/2 comparisons. The only minor saving is that sorted arrays need fewer actual swaps because elements are already in place (the per-pass minimum check still runs n-1 times). This non-adaptive behavior distinguishes it from insertion sort and optimized bubble sort.

How does selection sort perform on nearly sorted data?

Selection sort performs identically on nearly-sorted and random data—always O(n²) comparisons and O(n) swaps. It gains no advantage from existing order (non-adaptive). For nearly-sorted data, use insertion sort (O(n) best case) or optimized bubble sort with early exit. Selection sort's predictable performance can be beneficial in real-time systems requiring consistent timing, but it's inefficient when better alternatives exist.

What real-world systems use selection sort?

Limited production use: (1) Embedded systems sorting sensor arrays (5-20 values) with flash storage, (2) BIOS/UEFI firmware with strict memory limits, (3) Simple microcontrollers (Arduino-class) sorting small datasets, (4) Educational software teaching algorithm fundamentals, (5) Legacy industrial control systems with predictable timing requirements. Most modern systems use O(n log n) algorithms or language built-ins for better performance.

Helpful Developer Tools for Coding Practice

Enhance your algorithm development workflow with professional utilities:

Start Learning Selection Sort Visually

Master selection sort algorithm with step-by-step visualization. Watch minimum selection, swaps, and complexity analysis in real-time. Perfect for students, developers, and coding interview prep. 100% free, no signup required.

Interactive Animation
Minimal Swaps (O(n))
Code Examples (10 Languages)
Interview Q&A Bank
Try Interactive Visualizer Now ↑

Join 100,000+ students learning algorithms visually on Orbit2x