





This visualization shows either the recursion tree of a recursive algorithm or the recursion tree of the time complexity recurrence of a Divide and Conquer (D&C) algorithm.
You can also visualize the Directed Acyclic Graph (DAG) of a Dynamic Programming (DP) algorithm.
This is the Recursion Tree and Recursion Directed Acyclic Graph (DAG) visualization area. The Recursion Tree/DAG are drawn/animated as per how a real computer program that implements this recursion works, i.e., "depth-first".
The current parameter value is shown inside each vertex (comma-separated for recursion with two or more parameters). Active vertices will be colored orange. Vertices that are no longer calling any other recursive subproblem (the base cases) will be colored green. Vertices (subproblems) that are repeated will be colored lightblue from the second occurrence onwards. The return value of each recursive call is written as red text below the vertex.
Note that due to combinatorial explosion, it will be very hard to visualize the Recursion Tree for large instances.
For the Recursion DAG, it is also very hard to minimize the number of edge crossings in the event of overlapping subproblems.
Select one of the example recursive algorithms or write your own recursive code — in JavaScript. Note that the visualization can run any JavaScript code, including malicious code, so please be careful (it will only affect your own web browser, don't worry).
Click the 'Run' button at the top right corner of the action box to start the visualization after you have selected (or written) a valid JavaScript code!
In the next sub-sections, we start with example recursive algorithms with just one sub-problem, i.e., no branching. For these one-subproblem examples, their recursion trees and recursion DAGs are 100% identical (they look like Singly Linked Lists from the root (the initial call) to the leaf (the base case)). As there is no overlapping subproblem for the examples in this category, you will not see any lightblue-colored vertex and only one green-colored vertex (the base case).
The Factorial Numbers example computes the factorial of an integer N.
f(n) = 1 (if n == 0);
f(n) = n*f(n-1) otherwise
It is one of the simplest (tail) recursive functions and can easily be rewritten into an iterative version. Its time complexity is also simply O(n).
The value of Factorial f(n) grows very fast, thus try only the small values of n ≤ 10.
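For concreteness, here is a minimal JavaScript sketch of this recurrence (the example code built into the visualizer may differ in its details):

function f(n) {
  if (n === 0) return 1;   // base case: 0! = 1
  return n * f(n - 1);     // recursive case: n! = n * (n-1)!
}
// e.g., f(5) returns 120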
The Greatest Common Divisor (GCD) example computes the GCD of two integers a and b.
f(a, b) = a (if b == 0);
f(a, b) = f(b, a%b) otherwise
This is the classic Euclid's algorithm that runs in O(log min(a, b)).
Due to its low time complexity, it should be OK to try 0 ≤ a, b ≤ 99.
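A minimal JavaScript sketch of this recurrence (again, the built-in example may differ slightly):

function f(a, b) {
  if (b === 0) return a;   // base case: gcd(a, 0) = a
  return f(b, a % b);      // Euclid's recursive step
}
// e.g., f(48, 18) returns 6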
The Max Range Sum example computes the value of the subarray with the maximum total sum inside the given array a1 with n integers (the first textbox below the code editor textbox). The values in a1 can be positive integers, zeroes, or negative integers (without any negative integer, the answer is obviously the sum of all integers in a1).
Formally, let's define RSQ(i, j) = a1[i] + a1[i+1] + ... + a1[j], where 0 ≤ i ≤ j ≤ n-1 (RSQ stands for Range Sum Query). The Max Range Sum problem seeks to find the optimal i and j such that RSQ(i, j) is maximized.
f(i) = max(a1[0], 0) (if i == 0, as a1[0] can be negative);
f(i) = max(f(i-1) + a1[i], 0) otherwise
We call f(n-1); the largest value of f(i) encountered along the way is the answer.
This is the classic Kadane's algorithm that runs in O(n).
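A minimal JavaScript sketch of this recurrence (the helper variable best and the sample contents of a1 are illustrative, not necessarily what the visualizer uses):

var a1 = [4, -5, 4, -3, 4, 4, -4, 4, -5]; // a hypothetical sample input
var best = 0;                              // tracks the largest f(i) seen so far
function f(i) {
  var v = (i === 0) ? Math.max(a1[0], 0)
                    : Math.max(f(i - 1) + a1[i], 0);
  best = Math.max(best, v);
  return v;
}
f(a1.length - 1); // after this single O(n) call, best holds the answer (9 here)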
The Catalan example computes the N-th Catalan number recursively.
[This slide is a stub and will be expanded later].
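Although this slide is still a stub, a common recursive definition (an assumption here, since the slide does not yet show its code) is Cat(0) = 1 and Cat(n) = sum of Cat(i)*Cat(n-1-i) for i = 0..n-1, which in JavaScript could look like:

function f(n) {
  if (n === 0) return 1;            // base case: Cat(0) = 1
  var total = 0;
  for (var i = 0; i < n; i++)       // split around one chosen element
    total += f(i) * f(n - 1 - i);
  return total;
}
// e.g., f(4) returns 14 (the Catalan numbers are 1, 1, 2, 5, 14, 42, ...)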
In the next sub-sections, we will see example recursive algorithms that have exactly two sub-problems, i.e., branching. The sizes of the sub-problems can be identical or can vary. For these two-subproblem examples, their recursion trees will usually be much bigger than their recursion DAGs (especially if there are (many) overlapping sub-problems, indicated with the lightblue vertices in the recursion tree drawing).
The Fibonacci Numbers example computes the N-th Fibonacci number.
f(n) = n (if n <= 1); // i.e., 0 if n == 0 or 1 if n == 1
f(n) = f(n-1) + f(n-2) otherwise
Unlike the Factorial example, this time each recursive step recurses into two smaller sub-problems (if we call f(n-1) first before f(n-2), then the left side of the recursion tree will be taller than the right side — try swapping the two sub-problems). It can still be written in iterative fashion once one understands the concept of Dynamic Programming. The Fibonacci recursion tree (and DAG) is frequently used to showcase the basic idea of recursion, its inefficiency, and the link to the Dynamic Programming topic.
The value of Fibonacci(n) grows very fast and the Recursion Tree also grows exponentially, i.e., it has at least Ω(2^(n/2)) vertices, thus try only small values of n ≤ 7 (to avoid crashing your web browser). Its Recursion DAG only contains O(n) vertices and can thus go to a larger n ≤ 20 (to still look nice in this visualization).
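A minimal JavaScript sketch of the naive recurrence, plus a memoized variant (the names memo and fMemo are illustrative only) that mirrors what collapsing the tree into the Recursion DAG captures:

function f(n) {                     // naive: exponential-size recursion tree
  if (n <= 1) return n;
  return f(n - 1) + f(n - 2);
}
var memo = {};
function fMemo(n) {                 // memoized: each subproblem computed once, O(n) vertices
  if (n <= 1) return n;
  if (memo[n] !== undefined) return memo[n];
  return memo[n] = fMemo(n - 1) + fMemo(n - 2);
}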
The C(n, k) example computes the binomial coefficient C(n, k).
f(n, k) = 1 (if k == 0); // 1 way to take nothing out of n items
f(n, k) = 1 (if k == n); // 1 way to take everything out of n items
otherwise take the last item or skip it
f(n, k) = f(n-1, k-1) + f(n-1, k)
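A minimal JavaScript sketch of this recurrence:

function f(n, k) {
  if (k === 0 || k === n) return 1;       // the two base cases above
  return f(n - 1, k - 1) + f(n - 1, k);   // take the last item, or skip it
}
// e.g., f(5, 2) returns 10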
The 0-1 Knapsack example solves the 0/1 Knapsack Problem: What is the maximum total value that we can get, given a knapsack that can hold a maximum weight of w, where the value of the i-th item is a1[i] and the weight of the i-th item is a2[i]?
[This slide is a stub and will be expanded later].
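Although this slide is still a stub, a standard recurrence for this problem (an assumption here, since the slide does not yet show its code), with f(i, w) = the best value using items 0..i and remaining capacity w, could look like:

function f(i, w) {
  if (i < 0) return 0;                 // no items left to consider
  if (a2[i] > w) return f(i - 1, w);   // item i is too heavy: must skip it
  // otherwise: skip item i, or take it (gain value a1[i], lose capacity a2[i])
  return Math.max(f(i - 1, w), f(i - 1, w - a2[i]) + a1[i]);
}
// the answer is f(n-1, w) for n items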
In the next sub-sections, we will see example recursive algorithms that have many sub-problems (1, 2, 3, ..., up to a certain limit). For many of these examples, the sizes of their Recursion Trees are exponential and we need to use Dynamic Programming to compute their Recursion DAGs instead.
The Coin Change example solves the Coin Change problem: Given a list of coin values in a1, what is the minimum number of coins needed to get the value v?
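A minimal JavaScript sketch of a natural recurrence for this problem (assuming a1 holds the coin values; the built-in example may differ):

function f(v) {
  if (v === 0) return 0;               // value reached: no more coins needed
  var best = Infinity;
  for (var i = 0; i < a1.length; i++)
    if (a1[i] <= v)                    // try spending one coin of value a1[i]
      best = Math.min(best, f(v - a1[i]) + 1);
  return best;                         // stays Infinity if v cannot be formed
}
// e.g., with a1 = [1, 3, 4], f(6) returns 2 (3+3)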
The Longest Increasing Subsequence example solves the Longest Increasing Subsequence problem: Given an array a1, how long is the Longest Increasing Subsequence of the array?
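A minimal JavaScript sketch of a natural recurrence, with f(i) = the length of the LIS ending at index i (the built-in example may differ):

function f(i) {
  var best = 1;                        // a1[i] by itself is increasing
  for (var j = 0; j < i; j++)
    if (a1[j] < a1[i])                 // a1[i] can extend an LIS ending at j
      best = Math.max(best, f(j) + 1);
  return best;
}
// the answer is the maximum of f(i) over all 0 <= i <= n-1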
The Traveling Salesman example solves the Traveling Salesman Problem on a small graph: How long is the shortest route that goes from city 0, passes through every other city exactly once, and goes back to city 0? The distance between city i and city j is denoted by a1[i][j].
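A minimal JavaScript sketch in the style of the Held-Karp recursion (an assumption; the built-in example may use different parameters), with f(pos, mask) = the shortest way to finish the tour from city pos, given the set of already-visited cities encoded in the bitmask mask:

var n = a1.length;
function f(pos, mask) {
  if (mask === (1 << n) - 1)          // all cities visited:
    return a1[pos][0];                // return home to city 0
  var best = Infinity;
  for (var nxt = 0; nxt < n; nxt++)
    if (!(mask & (1 << nxt)))         // city nxt not visited yet
      best = Math.min(best, a1[pos][nxt] + f(nxt, mask | (1 << nxt)));
  return best;
}
// the answer is f(0, 1): start at city 0 with only city 0 visited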
The Graph Matching example computes the size of the maximum matching on a small graph, which is given in the adjacency matrix a1.
In the next sub-sections, instead of visualizing the Recursion Tree of a recursive algorithm, we visualize the recursion tree of the recurrence (equation) of the time complexity of certain Divide and Conquer (D&C) algorithms.
The value computed by f(n) (the red label underneath each vertex, i.e., the return value of that recursive call) is thus the total number of operations taken by the recursive algorithm on a problem of size n (the value drawn inside each vertex). Most textbooks call this recurrence T(n), but we choose not to change our default f(n) function name that is used in all the other recursive algorithm visualizations. Some other textbooks (e.g., CLRS) also label each vertex with its own cost only, not the cost of the entire subproblem.
In the Sorting visualization, we learn about Merge sort. Its time complexity recurrence is:
f(n) = Θ(1) (if n < n0) — we usually assume that the base cases are Θ(1)
f(n) = f(n/2) + f(n/2) + c*n (otherwise)
Please ensure that you see the recursion tree of the default example (n = 16); re-run the visualization if needed, then you can leave the picture as it is for the next few sub-slides. You should see the initial problem size n = 16 written inside the root vertex and its return value below it (the total amount of work done by f(16) is 32+32+1*16 = 80). This value of f(n) is consistent throughout the recursion tree, e.g., f(8) = f(4)+f(4)+c*8 = 12+12+1*8 = 32.
We see that the height of this recursion tree is log2 n, as we keep halving n until we reach the base case of size 1. For n = 16, we have 16 -> 8 -> 4 -> 2 -> 1 (log2 16 = 4 steps).
PS: the height of a tree is the number of edges from the root to the deepest leaf.
As the work done in the recursive step for a subproblem of size n is c*n (the divide step is trivial, Θ(1); the conquer (merge) step is Θ(n)), we perform exactly c*n operations at each recursion level of this specific recursion tree.
The root of size (n) does c*n operations during the merge step.
The two children of the root, each of size n/2, both do c*n/2 operations, and 2*(c*n/2) = c*n in total too.
The grandchildren level does 4*(c*n/4) = c*n too.
And so on until the last level (the leaves).
As the red label underneath each vertex in this visualization reports the cost of the entire subproblem (including its subtrees), these identical per-level costs are not easily seen, e.g., from root to leaf we see 80, 2x32 = 64, 4x12 = 48, 8x4 = 32, 16x1 = 16, and we may reach a different conclusion... However, if we subtract the cost of the subproblems, we get the same conclusion, e.g., the root does 80-2x32 = 16 operations, the children of the root do 2x(32-2x12) = 2x8 = 16 operations too, etc.
The number of green leaves is 2^(log2 n) = n^(log2 2) = n.
Each of these leaves does Θ(1) work, thus the total work at the last (leaf) level is also Θ(n).
The total work done by Merge sort is thus c*n per level, multiplied by the number of levels of the recursion tree (log2 n, plus 1 more for the leaf level), i.e., Θ(n log n).
For this example, f(16) = 80 from 1x16 x (log2 16 + 1) = 16 x (4 + 1) = 16 x 5 = 80.
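In fact, you can evaluate this recurrence directly; a minimal JavaScript sketch with c = 1 and base-case cost 1:

function f(n) {
  if (n <= 1) return 1;               // base case costs Θ(1), taken as 1 here
  return f(n / 2) + f(n / 2) + n;     // two halves plus c*n merge work
}
// f(16) returns 80, matching the value at the root of the tree above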
In Sorting visualization, we also learn about the non-random(ized) quick sort.
It may exhibit worst-case O(n^2) behavior on certain kinds of (trivial) instances, e.g., (nearly-)sorted input, and it may have the following time complexity recurrence (with a = 1):
f(n) = Θ(1) (if n < n0) — we usually assume that the base cases are Θ(1)
f(n) = f(n-a) + f(a) + c*n (otherwise)
Note that writing the recurrence in the other direction does not matter much asymptotically, other than the recursion tree will be mirrored.
Re-run the visualization if needed; we want to show that this recursion tree has f(n) = O(n^2).
We see that the height of this recursion tree is rather tall, i.e., n/a - 1, as we only reduce n by a per level. Thus, we need n/a - 1 steps to reach the base case (n = 1). For n = 16 and a = 1, we have 16 -> 15 -> 14 -> ... -> 2 -> 1 (16/1 - 1 = 15 steps).
As the work done in the recursive step for a subproblem of size n is c*n (the divide (partition) operation is Θ(n); the conquer step is trivial, Θ(1)), each recursion level costs c times the size of the subproblem at that level:
The root, of size n, does c*n operations during the partition step.
At the next level, the child of size n-a does c*(n-a) operations and the other child contributes f(a).
At the grandchildren level, the vertex of size n-2a does c*(n-2a), and the other again contributes f(a).
And so on until the last level (both leaves contribute f(a)).
The total work done by Quick sort on this worst-case input is thus c times the arithmetic progression 1+2+...+n, plus a few other constant factor operations (all the f(a) are Θ(1)). This simplifies to f(n) = Θ(n^2).
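Again, you can evaluate this recurrence directly; a minimal JavaScript sketch with a = 1, c = 1, and base-case cost 1:

function f(n) {
  if (n <= 1) return 1;               // f(a) = f(1) is Θ(1), taken as 1 here
  return f(n - 1) + f(1) + n;         // one subproblem of size n-1 plus c*n partition work
}
// f(n) equals n*(n+1)/2 plus lower-order terms, i.e., Θ(n^2)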
For recurrences of the form:
f(n) = a*f(n/b) + g(n), where a ≥ 1, b > 1, and g(n) is asymptotically positive,
we may be able to apply the master method/theorem.
PS: In this visualization, we have to rename CLRS function names to our convention:
f(n) → g(n) and T(n) → f(n).
We compare the driving function g(n) (the amount of divide and conquer work in each recursive step of size n) with n^(log_b a) (the watershed function — also the asymptotic number of leaves of the recursion tree). If g(n) = O(n^(log_b a - ε)) for some ε > 0, the driving function g(n) grows polynomially slower than the watershed function n^(log_b a) (by a factor of n^ε), thus the watershed function dominates and the solution of the recurrence is f(n) = Θ(n^(log_b a)).
Visually, if you look at the recursion tree for a recurrence that falls into the case 1 category (re-run the visualization to ensure that you see the correct picture), the cost per level grows exponentially from the root level to the leaves (in this picture, 1*4*4 = 16, 7*2*2 = 28, 49*1*1 = 49, ..., 16+28+49 = 93), and the total cost of the leaves dominates the total cost of all internal vertices.
The most popular example is Strassen's algorithm for matrix multiplication, where case 1 of the master theorem is applicable. The recurrence is: f(n) = 7*f(n/2) + c*n*n.
Thus a = 7, b = 2, watershed = n^(log2 7) ≈ n^2.807, driving g(n) = Θ(n^2).
n^2 = O(n^(2.807-ε)) for ε ≈ 0.807 — case 1 — thus f(n) = Θ(n^(log2 7)) ≈ Θ(n^2.807).
Exercise: You can try changing the demo code by setting a = 8 and changing g(n) from c*n*n to c*1, turning the recurrence of Strassen's algorithm into the recurrence of the simple recursive matrix multiplication algorithm. For this one, f(n) = Θ(n^3).
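To see case 1 numerically, here is a minimal JavaScript sketch evaluating Strassen's recurrence with c = 1 and base-case cost 1:

function f(n) {
  if (n <= 1) return 1;               // base case costs Θ(1), taken as 1 here
  return 7 * f(n / 2) + n * n;        // a = 7, b = 2, g(n) = n^2
}
// f(4) = 7*f(2) + 16 = 7*(7*1 + 4) + 16 = 93, matching the level costs 16 + 28 + 49 above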
The detailed analysis of the Merge sort algorithm from a few slides earlier can be simplified using the master theorem, this time with case 2, e.g., f(n) = 2*f(n/2) + n.
Thus a = 2, b = 2, watershed = n^(log2 2) = n, driving g(n) = Θ(n).
n = Θ(n log^k n) for k = 0 — case 2 — thus f(n) = Θ(n log n).
Visually, if you look at the recursion tree for a recurrence that falls into the case 2 category (re-run the visualization to ensure that you see the correct picture), the cost per level is roughly the same, i.e., Θ(n^(log_b a) log^k n), and there are about log_b n levels. Most of the time k = 0, i.e., the watershed and the driving functions have the same asymptotic growth, and we claim that the solution is f(n) = Θ(n^(log_b a) log^(k+1) n). That's it: the solution of a recurrence that falls under case 2 is g(n) with an extra log factor.
Exercise: You can try changing the demo code by setting a = 1 and changing g(n) from c*n to c*1, turning the recurrence of the Merge sort algorithm into the recurrence of the binary search algorithm. For this one, f(n) = Θ(log n).
Case 3 is the opposite of case 1: the driving function g(n) grows polynomially faster than the watershed function n^(log_b a). Thus the bulk of the operations is done by the driving function at the root level (but check the regularity condition too, elaborated below). Case 3 rarely appears in real algorithms, so we use an example recurrence: f(n) = 4*f(n/2) + n^3.
Thus a = 4, b = 2, watershed = n^(log2 4) = n^2, driving g(n) = Θ(n^3).
n^3 = Ω(n^(2+ε)) for ε = 1, and
4*(n/2)^3 ≤ c*n^3 (the regularity condition a*g(n/b) ≤ c*g(n)) holds for c = 1/2 — case 3 — thus f(n) = Θ(n^3).
Visually, if you look at the recursion tree for a recurrence that falls into the case 3 category (re-run the visualization to ensure that you see the correct picture), the cost per level drops exponentially from the root level to the leaves (in this picture, 1*4*4*4 = 64, 4*2*2*2 = 32, 16*1*1*1 = 16, ..., 64+32+16 = 112), and the total cost of the root dominates the total cost of all other vertices (including the (many) leaves).
You have reached the last slide. Return to 'Exploration Mode' to start exploring!
Note that if you notice any bug in this visualization or if you want to request for a new visualization feature, do not hesitate to drop an email to the project leader: Dr Steven Halim via his email address: stevenhalim at gmail dot com.