From dynamic memory allocation through red-black trees, graph algorithms, and dynamic programming — every data structure implemented from scratch, every algorithm proved correct.
DSA-specific depth you won’t find anywhere else on the market.
No library calls. You implement every linked list, every tree, every graph from raw memory allocation. 96+ algorithms for linked lists and their derived structures — 23 operations across 4 list types. You don’t use a data structure. You build one.
You don’t just test algorithms with sample inputs and hope. You learn to reason about iterative algorithms using pre-conditions, post-conditions, and loop invariants — and about recursive algorithms using mathematical induction.
Every data structure is mapped to the virtual address space of a running process. Stack frames, heap allocations, pointer chains — you see where your nodes, edges, and arrays actually live in memory.
Binary search trees are the starting point, not the destination. You implement AVL trees, red-black trees, and B/B+ trees — each with full insert, delete, search, and rebalancing algorithms.
Not just “here’s Dijkstra’s algorithm.” You learn the greedy design strategy first, then see how Dijkstra’s, Prim’s, and Kruskal’s are specific applications. Each algorithm understood as a design choice, not a recipe.
You start with recursion, encounter overlapping subproblems, discover intractability, invent memoisation, and arrive at the DP paradigm. The pedagogy mirrors the historical discovery.
Nine modules, each building on the foundations laid before.
Dynamic memory allocation, pointer mechanics, C++ classes, templates, and STL containers — the toolkit you need before any data structure.
Looping techniques, recursive function design, loop invariants, and mathematical induction — the reasoning framework for every algorithm that follows.
Bubble sort through heap sort, linear search through hashing — the fundamental operations inside every advanced data structure.
Dynamic arrays, stacks, queues, deques, priority queues, and matrix operations — array-based structures from contiguous memory.
Four linked list types with 23 operations each. Then linked-list-based stacks, queues, deques, sets, and disjoint sets — 96+ algorithms.
BST, AVL, red-black, and B/B+ trees — full traversal, insertion, deletion, and rebalancing.
Adjacency lists, DFS, BFS, Dijkstra, Bellman-Ford, Prim, Kruskal — modelling relationships and networks.
From overlapping recursion to memoisation to the DP paradigm — rod cutting, matrix chain multiplication, LCS.
Step counting, recurrence equations, and the five asymptotic notations.
This is the full course content across ~65 weekend sessions.
The prerequisite toolkit — dynamic memory, pointer mechanics, classes, templates, and STL containers that every subsequent module depends on.
Topics include malloc(), calloc(), realloc(), and free(); vector, list, deque, and array; and multimap.
The reasoning framework for every algorithm in the course — loop invariants for iteration, mathematical induction for recursion.
The fundamental operations that recur inside every advanced data structure — from bubble sort to heap sort, from linear search to hashing.
Contiguous memory structures — dynamic arrays, stacks, queues, priority queues, and the matrix operations that underlie numerical computing.
Four linked list types, 23 operations each, every algorithm implemented from scratch — then linked-list-based stacks, queues, sets, and disjoint sets.
Topics include array, and array-to-linked-list conversion.
The non-linear structures that make search logarithmic — BST, height-balanced trees, and disk-optimised trees, each with full implementation.
From mathematical definition to adjacency list implementation — traversals, shortest paths, and minimum spanning trees, each understood as a design choice.
The journey from intractable recursion to efficient tabulation — memoisation, optimal substructure, and the DP design paradigm applied to classical problems.
The mathematical language for reasoning about efficiency — step counting, recurrence equations, and the five asymptotic notations that formalise algorithmic performance.
You should be comfortable with the following topics in C before joining this course. These are not taught in the course — they are assumed.
Batch 25 starts 9th May 2026. First week is open for all.