This page can visualize the recursion tree of a recursive algorithm or the recursion tree of a Divide and Conquer (D&C) algorithm recurrence.

You can also visualize the Directed Acyclic Graph (DAG) of a Dynamic Programming (DP) algorithm.

PS: there is a silly sync bug whenever you switch e-Lecture slides, so please re-run the current visualization manually.

This is the Recursion Tree and Recursion Directed Acyclic Graph (DAG) visualization area. The Recursion Tree/DAG are drawn/animated as per how a real computer program that implements this recursion works, i.e., "depth-first".

The current parameter value is shown inside each vertex (comma-separated for recursion with two or more parameters). Active vertices will be colored orange. Vertices that are no longer calling any other recursive problem (the base cases) will be colored green. Vertices (subproblems) that are repeated will be colored lightblue for the second occurrence onwards. The return value of each recursive call is written as a red text below the vertex.

Note that due to combinatorial explosion, it will be very hard to visualize the Recursion Tree for large instances.

For the Recursion DAG, it is also very hard to minimize the number of edge crossings in the event of overlapping subproblems.

Select one of the example recursive algorithms or write your own recursive code — in JavaScript. Note that the visualization can run *any* JavaScript code, including malicious code, so please be careful (it will only affect your own web browser, don't worry).

Click the 'Run' button at the top right corner of the action box to start the visualization after you have selected (or written) a valid JavaScript code!

In the next sub-sections, we start with example recursive algorithms with just one sub-problem, i.e., no branching. For these one-subproblem examples, their recursion trees and recursion DAGs are 100% identical (they look like Singly Linked Lists from the root (initial call) to the leaf (base case)). As there is no overlapping subproblem for the examples in this category, you will not see any lightblue-colored vertex and only one green-colored vertex (the base case).

The Factorial Numbers example computes the factorial of an integer **N**.

f(0) = 1 (base case)

f(n) = n*f(n-1) otherwise

It is one of the simplest (tail) recursive functions and can be easily rewritten into an iterative version. Its time complexity is also simply O(n).

The value of Factorial f(n) grows very fast, thus try only the small values of n ≤ 10.
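As a sketch of what one might type into the code editor (the visualizer expects a recursive function `f`; the sample call below is only for illustration):

```javascript
// Recursive factorial, matching the recurrence above
function f(n) {
  if (n === 0) return 1;   // base case: f(0) = 1
  return n * f(n - 1);     // tail-recursive step: f(n) = n*f(n-1)
}

// e.g., f(5) = 120
```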

The Greatest Common Divisor (GCD) example computes the GCD of two integers `a` and `b`.

f(a, 0) = a (base case)

f(a, b) = f(b, a%b) otherwise

This is the classic Euclid's algorithm that runs in O(log min(a, b)).

Due to its low time complexity, it should be OK to try `0 ≤ a, b ≤ 99`.
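A minimal sketch of this recursion in JavaScript (the sample call is illustrative only):

```javascript
// Euclid's algorithm, matching the recurrence above
function f(a, b) {
  if (b === 0) return a;   // base case: gcd(a, 0) = a
  return f(b, a % b);      // recursive step: gcd(a, b) = gcd(b, a mod b)
}

// e.g., f(48, 18) = 6
```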

The Max Range Sum example computes the value of the subarray with the maximum total sum inside the given array `a1` with `n` integers (the first textbox below the code editor textbox). The values in `a1` can be positive integers, zeroes, or negative integers (without any negative integer, the answer is obviously the sum of the entire array).

Formally, let's define `RSQ(i, j) = a1[i] + a1[i+1] + ... + a1[j]`, where `0 ≤ i ≤ j ≤ n-1` (RSQ stands for Range Sum Query). The Max Range Sum problem seeks the optimal `i` and `j` such that `RSQ(i, j)` is maximized.

f(0) = max(a1[0], 0) (base case)

f(i) = max(f(i-1) + a1[i], 0) otherwise

We call `f(n-1)`. The largest value of `f(i)` is the answer.

This is the classic Kadane's algorithm that runs in O(n).
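A hedged sketch of this recursion in JavaScript; the array `a1` below is a made-up sample input (in the visualizer, `a1` comes from the first textbox), and the memo array keeps the chain of calls O(n):

```javascript
const a1 = [4, -5, 4, -3, 4, 4, -4, 4, -5];  // made-up sample input
const memo = [];

// f(i): best sum of a subarray ending at index i (or 0 if we restart)
function f(i) {
  if (memo[i] !== undefined) return memo[i];
  if (i === 0) return memo[0] = Math.max(a1[0], 0);   // base case
  return memo[i] = Math.max(f(i - 1) + a1[i], 0);     // extend or restart
}

// the answer is the largest f(i) over all i
let best = 0;
for (let i = 0; i < a1.length; i++) best = Math.max(best, f(i));
// best = 9 (subarray 4, -3, 4, 4)
```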

The Catalan example computes the **N**-th Catalan number recursively.

[This slide is a stub and will be expanded later].
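In the meantime, here is a hedged sketch of the standard Catalan recurrence (Cat(0) = 1; Cat(n) = Σ Cat(i)*Cat(n-1-i)), which this example presumably uses:

```javascript
// N-th Catalan number via the standard recurrence
function f(n) {
  if (n === 0) return 1;            // base case: Cat(0) = 1
  let sum = 0;
  for (let i = 0; i < n; i++)
    sum += f(i) * f(n - 1 - i);     // split around the chosen "root"
  return sum;
}

// e.g., f(4) = 14 (Catalan numbers: 1, 1, 2, 5, 14, 42, ...)
```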

In the next sub-sections, we will see example recursive algorithms that have exactly two sub-problems, i.e., branching. The sizes of the subproblems can be identical or can vary. For these two-subproblem examples, their recursion trees will usually be much bigger than their recursion DAGs (especially if there are (many) overlapping sub-problems, indicated with the lightblue vertices in the recursion tree drawing).

The Fibonacci Numbers example computes the **N**-th Fibonacci number.

f(0) = 0; f(1) = 1 (base cases)

f(n) = f(n-1) + f(n-2) otherwise

Unlike the Factorial example, this time each recursive step recurses into two smaller sub-problems (if we call f(n-1) first before f(n-2), the left side of the recursion tree will be taller than the right side — try swapping the two sub-problems). It can still be written in iterative fashion after one understands the concept of Dynamic Programming. The Fibonacci recursion tree (and DAG) is frequently used to showcase the basic idea of recursion, its inefficiency, and the linkage to the Dynamic Programming topic.

The value of Fibonacci(n) grows very fast and the Recursion Tree also grows exponentially, i.e., at least Ω(2^{n/2}), thus try only small values of n ≤ 7 (to avoid crashing your web browser). Its Recursion DAG only contains O(n) vertices, so you can go up to a larger n ≤ 20 (to still look nice in this visualization).
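A minimal sketch of the naive branching recursion (this is the version whose recursion tree is exponential; the DAG view collapses the lightblue repeated subproblems):

```javascript
// Naive Fibonacci: two branching subproblems per call
function f(n) {
  if (n <= 1) return n;            // base cases: f(0) = 0, f(1) = 1
  return f(n - 1) + f(n - 2);      // left branch first, then right
}

// e.g., f(7) = 13
```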

The C(n, k) example computes the binomial coefficient C(n, k).

f(n, 0) = 1 (if k == 0); // 1 way to take nothing out of n items

f(n, k) = 1 (if k == n); // 1 way to take everything out of n items

otherwise take the last item or skip it

f(n, k) = f(n-1, k-1) + f(n-1, k)
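The recurrence above translates directly to JavaScript (the sample call is illustrative only):

```javascript
// Binomial coefficient C(n, k) via Pascal's rule
function f(n, k) {
  if (k === 0 || k === n) return 1;        // base cases
  return f(n - 1, k - 1) + f(n - 1, k);    // take the last item or skip it
}

// e.g., f(5, 2) = 10
```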

The 0-1 Knapsack example solves the __0/1 Knapsack Problem__: What is the maximum value that we can get, given a knapsack that can hold a maximum weight of w, where the value of the i-th item is a1[i] and the weight of the i-th item is a2[i]?

[This slide is a stub and will be expanded later].
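In the meantime, here is a hedged sketch of the standard take-or-skip recursion for this problem; the arrays `a1` (values) and `a2` (weights) below are made-up sample inputs following the slide's notation:

```javascript
const a1 = [100, 70, 50, 10];   // made-up item values
const a2 = [10, 4, 6, 12];      // made-up item weights

// f(i, w): best value obtainable from items 0..i with remaining capacity w
function f(i, w) {
  if (i < 0) return 0;                            // no items left
  if (a2[i] > w) return f(i - 1, w);              // item i too heavy: skip it
  return Math.max(f(i - 1, w),                    // skip item i
                  f(i - 1, w - a2[i]) + a1[i]);   // or take item i
}

// e.g., f(3, 12) = 120 (take items 1 and 2: values 70 + 50, weights 4 + 6)
```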

In the next sub-sections, we will see example recursive algorithms that have many sub-problems (1, 2, 3, ..., up to a certain limit). For many of these examples, the sizes of their Recursion Trees are exponential and we will need to use Dynamic Programming to compute their Recursion DAGs instead.

The Coin Change example solves the __Coin Change problem__: Given a list of coin values in a1, what is the minimum number of coins needed to get the value v?
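A hedged sketch of this recursion, with a made-up coin list `a1` following the slide's notation (each call branches into up to |a1| subproblems):

```javascript
const a1 = [1, 3, 4];   // made-up coin values

// f(v): minimum number of coins needed to make value v
function f(v) {
  if (v === 0) return 0;                              // base case: 0 coins for value 0
  let best = Infinity;
  for (const c of a1)
    if (c <= v) best = Math.min(best, 1 + f(v - c));  // try every usable coin
  return best;
}

// e.g., f(6) = 2 (3 + 3)
```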

The Longest Increasing Subsequence example solves the __Longest Increasing Subsequence__ problem: Given an array a1, how long is the Longest Increasing Subsequence of the array?
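A hedged sketch of the classic O(n²) recursion for this problem, with a made-up sample array `a1` following the slide's notation:

```javascript
const a1 = [2, 7, 4, 3, 8];   // made-up sample input

// f(i): length of the longest increasing subsequence ending at index i
function f(i) {
  let best = 1;                                        // a1[i] alone
  for (let j = 0; j < i; j++)
    if (a1[j] < a1[i]) best = Math.max(best, f(j) + 1); // extend any smaller prefix
  return best;
}

// the answer is the largest f(i) over all i
let ans = 0;
for (let i = 0; i < a1.length; i++) ans = Math.max(ans, f(i));
// ans = 3 (e.g., 2, 4, 8 or 2, 3, 8)
```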

The Traveling Salesman example solves the __Traveling Salesman Problem__ on a small graph: How long is the shortest tour that starts from city 0, passes through every other city exactly once, and returns to city 0? The distance between city i and city j is denoted by a1[i][j].

The Graph Matching example computes the size of the maximum __matching__ of a **small** graph, which is given in the adjacency matrix a1.

In the next sub-sections, instead of visualizing the Recursion Tree of a recursive algorithm, we visualize the recursion tree of the recurrence (equation) of the time complexity of certain Divide and Conquer (D&C) algorithms.

The value computed by `f(n)` (the red label underneath each vertex that signifies the return value of that recursive function/that subproblem) is thus the **total** number of operations taken by that recursive algorithm when its problem size is `n` (the value drawn inside each vertex). Most textbooks name the function of this recurrence `T(n)`, but we choose not to change our default `f(n)` function name that is used in all the other recursive algorithm visualizations. Some other textbooks (e.g., CLRS) also report only the cost of each vertex, not the cost of the entire subproblem.

In the __Sorting__ visualization, we learn about Merge sort. Its time complexity recurrences are:

f(1) = c*1 (base case)

f(n) = 2*f(n/2) + c*n otherwise

Please ensure that you see the recursion tree of the default example (n = 16). Click `n = 16` written inside the root vertex and check its return value (the total amount of work done by `f(16)` is `32+32+1*16 = 80`). This value of `f(n)` is consistent throughout the recursion tree, e.g., `f(8) = f(4)+f(4)+c*8 = 12+12+1*8 = 32`.

We see that the height of this recursion tree is log_{2} n, as we keep halving n until we reach the base case of size 1. For n = 16, we have 16 -> 8 -> 4 -> 2 -> 1 (log_{2} 16 = 4 steps).

PS: the height of a tree = the number of edges from the root to the deepest leaf.

As the effort done in the recursive step per subproblem of size n is c*n (the divide step is trivial, Θ(1); the conquer (merge) operation is Θ(n)), we will perform exactly c*n operations at each recursion level of **this** specific recursion tree.

The root of size (n) does c*n operations during the merge step.

The two children of the root, each of size (n/2), do c*n/2 operations each, and 2*(c*n/2) = c*n too.

The grandchildren level does 4*(c*n/4) = c*n too.

And so on until the last level (the leaves).

As the red label underneath each vertex in this visualization reports the cost of the entire subproblem (including the subtrees below it), these identical per-level costs are not easily seen: from root to leaf level we see 80, 2x32 = 64, 4x12 = 48, 8x4 = 32, 16x1 = 16, and may draw a different conclusion. However, if we discount the costs of each vertex's subproblems, we get the same conclusion: the root does 80-2x32 = 16 operations, the children of the root do 2x(32-2x12) = 2x8 = 16 operations too, etc.

The number of green leaves is 2^{log2 n} = n^{log2 2} = n.

Each of these leaf does Θ(1) step, thus the total work of the last (leaf) level is also Θ(n).

The total work done by Merge sort is thus c*n per level, multiplied by the number of levels of the recursion tree (log_{2} n, plus 1 more for the leaf level), or Θ(n log n).

For this example, `f(16) = 80` from 1x16 x (log_{2} 16 + 1) = 16 x (4 + 1) = 16 x 5 = 80.
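As a sanity check, the recurrence can be evaluated directly (a small sketch with c = 1, assuming n is a power of two):

```javascript
// Merge sort recurrence: f(n) = 2*f(n/2) + c*n with c = 1
function f(n) {
  if (n === 1) return 1;        // base case: f(1) = c*1
  return 2 * f(n / 2) + n;      // two halves plus the Θ(n) merge
}

// f(4) = 12, f(8) = 32, f(16) = 80 — matching the values in the tree
```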

In __Sorting__ visualization, we also learn about the non-random(ized) quick sort.

It may have worst-case O(n^{2}) behavior on certain kinds of (trivial) instances of (nearly-) sorted input, with the following time complexity recurrence (with `a = 1`):

f(1) = c*1 (base case)

f(n) = f(n-a) + f(a) + c*n otherwise

Note that writing the recurrence in the other direction does not matter much asymptotically, other than the recursion tree will be mirrored.


We want to show that this recursion tree has f(n) = O(n^{2}).

We see that the height of this recursion tree is rather tall, i.e., `n/a - 1`, as we only reduce `n` by `a` per level. Thus, we need `n/a - 1` steps to reach the base case (`n = 1`). For `n = 16, a = 1`, we have 16 -> 15 -> 14 -> ... -> 2 -> 1 (16/1 - 1 = 15 steps).

As the effort done in the recursive step per subproblem of size `n` is `c*n` (the divide (partition) operation is Θ(n); the conquer step is trivial, Θ(1)), we will perform `c*k` operations at each recursion level, where `k` is the size of the larger subproblem at that level.

The root of size (n) does c*n operations during the partition step.

One child of the root, of size (n-a), does c*(n-a) operations during its partition step; the other child, of size a, only does f(a) = Θ(1) work.

Similarly at the grandchildren level: c*(n-2a), plus another f(a).

And so on until the last level, where both leaves do f(a) = Θ(1) work.

The total work done by Quick sort on this worst-case input is the sum of the arithmetic progression `c*(1+2+...+n)` plus a few other constant-factor operations (all the `f(a)` calls are Θ(1)). This simplifies to `f(n) = Θ(n^{2})`.
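This recurrence can also be evaluated directly; the sketch below assumes the reconstructed recurrence `f(n) = f(n-a) + f(a) + c*n` with `a = 1` and `c = 1`:

```javascript
// Worst-case Quick sort recurrence with a = 1, c = 1:
// f(n) = f(n-1) + f(1) + n, base case f(1) = 1
function f(n) {
  if (n === 1) return 1;             // base case
  return f(n - 1) + f(1) + n;        // shrink by 1 per level, Θ(n) partition
}

// f(n) grows quadratically, consistent with Θ(n^2)
```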

For recurrences of the form:

f(n) = a*f(n/b) + g(n)

where a ≥ 1, b > 1, and g(n) is asymptotically positive,

we may be able to apply the master method/theorem.

PS: In this visualization, we have to rename CLRS function names to our convention:`f(n) → g(n)` and `T(n) → f(n)`.

We compare the driving function `g(n)` (the amount of divide and conquer work in each recursive step of size `n`) with `n^{log_b a}` (the watershed function — also the asymptotic number of leaves of the recursion tree). If `g(n) = O(n^{log_b a - ε})` for some `ε > 0`, we are in case 1 and `f(n) = Θ(n^{log_b a})`; if `g(n) = Θ(n^{log_b a} log^{k} n)` for some `k ≥ 0`, we are in case 2 and `f(n) = Θ(n^{log_b a} log^{k+1} n)`; if `g(n) = Ω(n^{log_b a + ε})` for some `ε > 0` (and the regularity condition holds), we are in case 3 and `f(n) = Θ(g(n))`.

Visually, if you see the recursion tree of a recurrence that falls into the case 1 category, the cost per level grows exponentially from the root level to the leaves (in this picture, 1*4*4 = 16, 7*2*2 = 28, 49*1*1 = 49, ..., 16+28+49 = 93), and the total cost of the leaves dominates the total cost of all internal vertices.

The most popular example is __Strassen's algorithm for matrix multiplication__, where case 1 of the master theorem is applicable. The recurrence is: `f(n) = 7*f(n/2) + c*n*n`.

Thus `a = 7`, `b = 2`, watershed = `n^{log_2 7} ≈ n^{2.807}`, driving `g(n) = Θ(n^{2})`.

`n^{2} = O(n^{2.807-ε})` for `ε = 0.807...` — case 1 — thus `f(n) = Θ(n^{log_2 7}) ≈ Θ(n^{2.807})`.

Exercise: You can try changing the demo code by setting `a = 8` and setting `g(n)` from `c*n*n` to `c*1` to change the recurrence of Strassen's algorithm into the recurrence of the simple recursive matrix multiplication algorithm. For this one, `f(n) = Θ(n^{3})`.

The detailed analysis of the Merge sort algorithm from a few slides earlier can be simplified using the master theorem, this time case 2, e.g., `f(n) = 2*f(n/2) + n`.

Thus `a = 2`, `b = 2`, watershed = `n^{log_2 2} = n`, driving `g(n) = Θ(n)`.

`n = Θ(n log^{k} n)` for `k = 0` — case 2 — thus `f(n) = Θ(n log n)`.

Visually, if you see the recursion tree of a recurrence that falls into the case 2 category, the cost per level is (asymptotically) the same (`n^{log_b a} log^{k} n`) and there are about `log_b n` such levels, which gives the extra log factor in `f(n)`.

Exercise: You can try changing the demo code by setting `a = 1` and set `g(n)` from `c*n` to `c*1` to change the recurrence of Merge sort algorithm to the recurrence of the binary search algorithm. For this one, `f(n) = Θ(log n)`.

Case 3 is the opposite of case 1: the driving function `g(n)` grows polynomially faster than the watershed function `n^{log_b a}`. Thus the bulk of the operations is done by the driving function at the root level (but check the regularity condition too, to be elaborated below). Case 3 rarely appears in real algorithms, so we use an example recurrence:

f(n) = 4*f(n/2) + c*n*n*n

Thus `a = 4`, `b = 2`, watershed = `n^{log_2 4} = n^{2}`, driving `g(n) = Θ(n^{3})`.

`n^{3} = Ω(n^{2+ε})` for `ε = 1` and the regularity condition holds (`4*g(n/2) = 4*c*(n/2)^{3} = c*n^{3}/2 ≤ (1/2)*g(n)`) — case 3 — thus `f(n) = Θ(n^{3})`.

Visually, if you see the recursion tree of a recurrence that falls into the case 3 category, the cost per level **drops** exponentially from the root level to the leaves (in this picture, 1*4*4*4 = 64, 4*2*2*2 = 32, 16*1*1*1 = 16, ..., 64+32+16 = 112), and the total cost of the root dominates the total cost of all other vertices (including the (many) leaves).