Array-Based vs. Linked-List-Based

| Operation | Doubly Linked List | Singly Linked List | Array |
| --- | --- | --- | --- |
| Prepend | O(1) | O(1) | O(n) / O(1) with a ring buffer |
| Append | O(1) | O(n) / O(1) with a tail reference | O(1) |
| InsertAt | O(n) | O(n) | O(n) |
| RemoveFirst | O(1) | O(1) | O(n) / O(1) with a memory leak / O(1) with a ring buffer |
| RemoveLast | O(1) | O(n) | O(1) |
| RemoveAt | O(n) | O(n) | O(n) |
| Aspect | Circular Buffer (Array) Deque | Doubly Linked List Deque |
| --- | --- | --- |
| Memory Usage | Efficient: only stores elements, but resizing doubles memory usage. | Higher: requires extra memory for pointers with each element. |
| Cache Performance (CPU) | High cache locality because elements are stored contiguously in an array. | Poor cache locality as nodes are scattered in memory. |
| Scalability | May require resizing, which can be expensive. | Scales smoothly with dynamic memory allocation. |

Stacks

Stacks leverage arrays’ O(1) insert-at-end and delete-at-end operations to implement LIFO (Last In, First Out) behavior, and the vacated memory in the underlying array can easily be reused.

It is used in: DFS, undo operations, and parentheses matching.

type Stack []int

func (s *Stack) Push(val int) {
	*s = append(*s, val)
}

func (s *Stack) Pop() (int, bool) {
	if len(*s) == 0 {
		return -1, false
	}
	pop := (*s)[len(*s)-1]

	// Optional: clean the value in underlying array
	(*s)[len(*s)-1] = 0

	*s = (*s)[:len(*s)-1]
	return pop, true
}

func (s *Stack) Peek() (int, bool) {
	if len(*s) == 0 {
		return -1, false
	}
	return (*s)[len(*s)-1], true
}

The pointer receiver ensures the methods operate on the original Stack rather than a copy, which is essential for changing the length.
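As a sketch of the parentheses use case mentioned above, a stack can check whether brackets are balanced. `IsBalanced` is an illustrative name, not from the original; it uses a plain `[]rune` slice as the stack.

```go
package main

import "fmt"

// IsBalanced reports whether every bracket in s is closed in the
// correct order, using a slice as a LIFO stack.
func IsBalanced(s string) bool {
	pairs := map[rune]rune{')': '(', ']': '[', '}': '{'}
	var stack []rune
	for _, r := range s {
		switch r {
		case '(', '[', '{':
			stack = append(stack, r) // push the opener
		case ')', ']', '}':
			// A closer must match the most recent opener.
			if len(stack) == 0 || stack[len(stack)-1] != pairs[r] {
				return false
			}
			stack = stack[:len(stack)-1] // pop
		}
	}
	return len(stack) == 0 // any leftover opener is unbalanced
}

func main() {
	fmt.Println(IsBalanced("({[]})")) // true
	fmt.Println(IsBalanced("(]"))     // false
}
```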

Bounded Stack

A fixed-size stack.

type BoundedStack struct {
	buffer []int
	size   int
}

func NewBoundedStack(capacity int) BoundedStack {
	return BoundedStack{
		buffer: make([]int, capacity),
	}
}

func (bs *BoundedStack) IsFull() bool {
	return bs.size == len(bs.buffer)
}

func (bs *BoundedStack) IsEmpty() bool {
	return bs.size == 0
}

func (bs *BoundedStack) Push(val int) bool {
	if bs.IsFull() {
		return false
	}
	bs.buffer[bs.size] = val
	bs.size++
	return true
}

func (bs *BoundedStack) Pop() (int, bool) {
	if bs.IsEmpty() {
		return -1, false
	}
	pop := bs.buffer[bs.size-1]
	bs.buffer[bs.size-1] = 0 // Optional
	bs.size--
	return pop, true
}

func (bs *BoundedStack) Peek() (int, bool) {
	if bs.IsEmpty() {
		return -1, false
	}
	return bs.buffer[bs.size-1], true
}
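A quick usage sketch (the type is repeated in condensed form so the snippet compiles on its own): when the stack is full, `Push` reports failure instead of growing.

```go
package main

import "fmt"

// BoundedStack as defined above, condensed for a runnable example.
type BoundedStack struct {
	buffer []int
	size   int
}

func NewBoundedStack(capacity int) BoundedStack {
	return BoundedStack{buffer: make([]int, capacity)}
}

func (bs *BoundedStack) Push(val int) bool {
	if bs.size == len(bs.buffer) {
		return false // full: refuse instead of resizing
	}
	bs.buffer[bs.size] = val
	bs.size++
	return true
}

func (bs *BoundedStack) Pop() (int, bool) {
	if bs.size == 0 {
		return -1, false
	}
	bs.size--
	return bs.buffer[bs.size], true
}

func main() {
	bs := NewBoundedStack(2)
	fmt.Println(bs.Push(1)) // true
	fmt.Println(bs.Push(2)) // true
	fmt.Println(bs.Push(3)) // false: stack is full
	v, _ := bs.Pop()
	fmt.Println(v) // 2
}
```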

Queues

It is used in: BFS, Producer-Consumer Problems

Head Tracking

Even though arrays cannot remove the first element efficiently, we can still keep track of a head index to implement FIFO (First In, First Out) behavior. However, the unused memory before head is wasted.

type Queue struct {
	buffer []int
	head   int
}

func (q *Queue) Enqueue(val int) {
	q.buffer = append(q.buffer, val)
}

func (q *Queue) Dequeue() (int, bool) {
	// Check if the queue is empty
	if q.head >= len(q.buffer) {
		return -1, false
	}
	pop := q.buffer[q.head]
	
	// Optional: clean the value in underlying array
	q.buffer[q.head] = 0
	
	q.head++
	return pop, true
}
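One way to bound that wasted prefix (a hypothetical refinement, not part of the original design) is to compact the buffer whenever the dead region before head grows past half its length. The type above is repeated so the snippet compiles on its own.

```go
package main

import "fmt"

// Queue with head tracking, plus a compaction step that reclaims
// the dead region before head once it dominates the buffer.
type Queue struct {
	buffer []int
	head   int
}

func (q *Queue) Enqueue(val int) {
	q.buffer = append(q.buffer, val)
}

func (q *Queue) Dequeue() (int, bool) {
	if q.head >= len(q.buffer) {
		return -1, false
	}
	pop := q.buffer[q.head]
	q.head++

	// Compact when over half the buffer is dead space:
	// shift the live elements to the front and reset head.
	if q.head > len(q.buffer)/2 {
		q.buffer = append(q.buffer[:0], q.buffer[q.head:]...)
		q.head = 0
	}
	return pop, true
}

func main() {
	var q Queue
	for i := 1; i <= 4; i++ {
		q.Enqueue(i)
	}
	q.Dequeue() // 1
	q.Dequeue() // 2
	q.Dequeue() // 3; compaction has reclaimed the dead prefix
	fmt.Println(q.head, len(q.buffer)) // 0 1
}
```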

Go slicing can simplify the code.

type Queue []int

func (q *Queue) Enqueue(val int) {
	*q = append(*q, val)
}

func (q *Queue) Dequeue() (int, bool) {
	if len(*q) == 0 {
		return -1, false
	}
	pop := (*q)[0]
	
	// Optional: clean the value in underlying array
	(*q)[0] = 0

	*q = (*q)[1:]
	return pop, true
}

The pointer receiver ensures the methods operate on the original Queue rather than a copy, which is essential for changing the length.
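A short usage sketch of the slice-based queue (type repeated, zeroing omitted, so the snippet compiles on its own), showing FIFO order and the empty case:

```go
package main

import "fmt"

type Queue []int

func (q *Queue) Enqueue(val int) { *q = append(*q, val) }

func (q *Queue) Dequeue() (int, bool) {
	if len(*q) == 0 {
		return -1, false
	}
	pop := (*q)[0]
	*q = (*q)[1:] // advance the slice header past the dequeued element
	return pop, true
}

func main() {
	var q Queue
	q.Enqueue(10)
	q.Enqueue(20)
	a, _ := q.Dequeue()
	b, _ := q.Dequeue()
	fmt.Println(a, b) // 10 20: FIFO order
	_, ok := q.Dequeue()
	fmt.Println(ok) // false: queue is empty
}
```

Note that re-slicing with `(*q)[1:]` keeps the underlying array alive, so like the head-tracking version this still retains memory for dequeued elements until the slice is garbage-collected or reallocated by `append`.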

Circular/Ring Buffer