Algorithm Complexity with Go — Linear Time Complexity O(n)

Kostiantyn Lysenko · Jul 14 · Dev Community


Today, we will focus on linear time complexity, often denoted as O(n).

Linear time complexity means that the time required to complete the algorithm grows in direct proportion to the input size. This class of complexity is considered efficient because it scales predictably with input size, striking a balance between simplicity and performance.

Analogy to Understand Linear Time Complexity

Imagine you are a postman 👮🏻‍♂️ delivering letters. You have a list of addresses, and you need to deliver one letter to each address.

If you have 10 letters to deliver, it takes 10 stops.

If you have 100 letters to deliver, it takes 100 stops.

If you have 1 000 000 letters to deliver, it takes 1 000 000 stops.

Each stop takes a consistent amount of time to park, deliver the letter, and return to the van.

The time taken grows proportionally with the number of letters.

Just as each stop takes the same amount of time, each memory access and write operation in a computer takes a consistent amount of time, so doubling the number of items roughly doubles the total time needed.

Real-world examples

Let’s consider the most common real-world examples of operations that typically have linear time complexity:

Iterating through a list to perform some action on each element. For example, printing each element of a list of names:

package main

import "fmt"

func main() {
    slice := []string{"Alice", "Bob", "Charlie"}
    for _, name := range slice {
        fmt.Println(name)
    }
    // Output: Alice
    // Output: Bob
    // Output: Charlie
}

Simple search operations in an unsorted list. Finding a specific number in an unsorted list of integers:

package main

import "fmt"

func main() {
    slice := []int{30, 10, 2, 15, 130}
    target := 30

    for _, value := range slice {
       if value == target {
          fmt.Println(value)
       }
    }

    // Output: 30
}

Summing all elements in a list:

package main

import "fmt"

func main() {
    slice := []int{1, 2, 3, 4, 5}

    sum := 0
    for _, value := range slice {
       sum += value
    }

    fmt.Println(sum) // Output: 15
}

Copying all elements from one list to another:

package main

import "fmt"

func main() {
    slice := []int{1, 2, 3, 4, 5}

    newSlice := make([]int, len(slice))
    copy(newSlice, slice)

    fmt.Println(newSlice) // Output: [1 2 3 4 5]
}

Merging two lists into one:

package main

import "fmt"

func main() {
    slice1 := []int{1, 2, 3}
    slice2 := []int{4, 5, 6}

    mergedSlice := append(slice1, slice2...)

    fmt.Println(mergedSlice) // Output: [1 2 3 4 5 6]
}

Reversing a list:

package main

import "fmt"

func main() {
    slice := []int{1, 2, 3, 4, 5}

    for i, j := 0, len(slice)-1; i < j; i, j = i+1, j-1 {
       slice[i], slice[j] = slice[j], slice[i]
    }

    fmt.Println(slice) // Output: [5 4 3 2 1]
}

Benchmarks

Let’s benchmark copy() and confirm its linear time complexity.

Create a file named copy_benchmark_test.go:

package main

import (
    "fmt"
    "testing"
)

// copyArray copies all elements from one slice to another
func copyArray(arr []int) []int {
    newArr := make([]int, len(arr))
    copy(newArr, arr)
    return newArr
}

// BenchmarkCopyArray benchmarks the copyArray function
func BenchmarkCopyArray(b *testing.B) {
    sizes := []int{1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000}

    for _, size := range sizes {
       b.Run("Size="+fmt.Sprint(size), func(b *testing.B) {
          arr := make([]int, size)
          for i := 0; i < size; i++ {
             arr[i] = i
          }

          b.ResetTimer()

          for i := 0; i < b.N; i++ {
             _ = copyArray(arr)
          }
       })
    }
}

Run with:

go test -bench .

Enjoy the result:

goos: darwin
goarch: amd64
pkg: cache
cpu: Intel(R) Core(TM) i7-8569U CPU @ 2.80GHz
BenchmarkCopyArray
BenchmarkCopyArray/Size=1000
BenchmarkCopyArray/Size=1000-8 586563 1773 ns/op
BenchmarkCopyArray/Size=2000
BenchmarkCopyArray/Size=2000-8 459171 2582 ns/op
BenchmarkCopyArray/Size=3000
BenchmarkCopyArray/Size=3000-8 331647 3588 ns/op
BenchmarkCopyArray/Size=4000
BenchmarkCopyArray/Size=4000-8 263466 4721 ns/op
BenchmarkCopyArray/Size=5000
BenchmarkCopyArray/Size=5000-8 212155 5728 ns/op
BenchmarkCopyArray/Size=6000
BenchmarkCopyArray/Size=6000-8 179078 6902 ns/op
BenchmarkCopyArray/Size=7000
BenchmarkCopyArray/Size=7000-8 152635 7573 ns/op
BenchmarkCopyArray/Size=8000
BenchmarkCopyArray/Size=8000-8 142131 8423 ns/op
BenchmarkCopyArray/Size=9000
BenchmarkCopyArray/Size=9000-8 118581 9780 ns/op
BenchmarkCopyArray/Size=10000
BenchmarkCopyArray/Size=10000-8 109848 14530 ns/op
PASS

Plotting ns/op against size yields an almost straight line ↗, confirming that copy() has linear time complexity, O(n): the time taken increases proportionally with the size of the array.

For very large slices, linear time complexity means performance degrades in direct proportion to the size of the slice, so it can be worth optimizing such operations or considering parallel processing for performance improvement.

Summary

Understanding linear time complexity empowers you to build efficient, scalable, and maintainable software, ensuring that applications perform well across a wide range of input sizes. This foundational knowledge is essential for anyone working with algorithms and data structures.

Before settling on a linear-time solution, make sure it is the best option available: a different data structure, such as a map with O(1) lookups, can sometimes eliminate the O(n) scan entirely. 🐈‍⬛
