







The sorting problem has a variety of interesting algorithmic solutions that embody many Computer Science ideas:
- Comparison versus non-comparison based strategies,
- Iterative versus Recursive implementation,
- Divide-and-Conquer paradigm (e.g., Merge Sort or Quick Sort),
- Best/Worst/Average-case Time Complexity analysis,
- Randomized Algorithms, etc.
When an array is sorted, many problems involving it become easy (or easier), for example:
- Searching for a specific value v in array A,
- Finding the min/max or the k-th smallest/largest value in (static) array A,
- Testing for uniqueness and deleting duplicates in array A,
- Counting how many times a specific value v appears in array A,
- Set intersection/union between array A and another sorted array B,
- Finding a target pair x ∈ A and y ∈ A such that x + y equals a target z,
- Counting how many values fall within the range [lo..hi], etc.
The content of this interesting slide (the answer to the usually intriguing discussion point from the earlier slide) is hidden and only available to legitimate CS lecturers worldwide. This mechanism is used in the various flipped classrooms in NUS.
If you are really a CS lecturer (or an IT teacher) (outside of NUS) and are interested to know the answers, please drop an email to stevenhalim at gmail dot com (show your University staff profile/relevant proof to Steven) for Steven to manually activate this CS lecturer-only feature for you.
FAQ: This feature will NOT be given to anyone else who is not a CS lecturer.
The first action is creating the input array, which can be:
- fully random,
- random but sorted (in ascending/descending order),
- random but nearly sorted (in ascending/descending order),
- random but with many duplicates (i.e., integers within a small range), or
- fully defined by yourself.
The second action is the most important one: executing a sorting algorithm by clicking the "Sort" button.
Note that you can switch the active algorithm by toggling the algorithm abbreviations at the top of the page.
To save screen space, we abbreviate algorithm names into three characters:
- Comparison-based sorting algorithms:
- BUB - Bubble Sort,
- SEL - Selection Sort,
- INS - Insertion Sort,
- MER - Merge Sort (recursive implementation),
- QUI - Quick Sort (recursive implementation),
- R-Q - Random Quick Sort (recursive implementation)
- Not comparison-based sorting algorithms:
- COU - Counting Sort,
- RAD - Radix Sort
In the next slides, we will discuss three comparison-based sorting algorithms.
These three sorting algorithms are the easiest to implement but also not the most efficient, as their time complexity is O(N²).
You should already know the following:
- logarithms and exponentials, e.g., log₂(1024) = 10, 2¹⁰ = 1024,
- arithmetic progressions, e.g., 1+2+3+4+…+10 = 10*11/2 = 55,
- geometric progressions, e.g., 1+2+4+8+…+1024 = 1*(1-2¹¹)/(1-2) = 2047,
- linear/quadratic/cubic functions, e.g., f1(x) = x+2, f2(x) = x²+x-1, f3(x) = x³+2x²-x+7,
- ceiling, floor, and absolute value functions, e.g., ceil(3.1) = 4, floor(3.1) = 3, abs(-7) = 7.
We count the number of operations (arithmetic, assignment, comparison, etc.) instead of measuring the actual elapsed time. Since an algorithm's running time is closely tied to the number of operations it needs, this is a good estimate of its efficiency.
See SpeedTest.cpp | py | java and its comments (in particular, note how the value of the counter is obtained).
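As an illustration of operation counting (this is our own minimal sketch, not the actual SpeedTest.cpp code):

```cpp
// Count the comparisons performed by a simple scan for the maximum value.
// The counter, not wall-clock time, is our measure of the work done.
long long countMaxComparisons(const int a[], int N) {
  long long counter = 0;
  int best = a[0];
  for (int i = 1; i < N; i++) {
    counter++;                 // one comparison per loop iteration
    if (a[i] > best) best = a[i];
  }
  return counter;              // exactly N-1 comparisons, regardless of input order
}
```

Such a counter lets us state the exact number of operations as a function of N.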
Knowing the (exact) number of operations an algorithm needs, we can then say things like "algorithm X needs 2n² + 100n operations to solve a problem of size n".
If one operation takes time t, we can say X needs (2n² + 100n)t time units to solve a problem of size n.
However, t depends on the factors mentioned earlier, e.g., the language, the compiler, the machine, ...
Therefore, instead of fixating on the actual time t, we say that algorithm X takes time proportional to 2n² + 100n to solve a problem of size n.
f(100000) = 2*100000² + 100*100000 = 20010M.
We say that algorithm A has time complexity O(f(n)), where f(n) is the growth-rate function of algorithm A.
O(2ⁿ)/exponential time < O(n!)/also-exponential time < ∞ (e.g., an infinite loop).
- Compare a pair of adjacent items (a, b),
- Swap that pair if the items are out of order (in this case, when a > b),
- Repeat steps 1 and 2 until we reach the end of the array (the last pair is the (N-2)-th and (N-1)-th items, as our array is zero-based),
- By now, the largest item will be at the last position. We then reduce N by 1 and repeat step 1 until N = 1.
If you imagine the larger items "bubbling up" (i.e., floating to the right side of the array), you should be able to see a "bubble-like" animation.
void bubbleSort(int a[], int N) { // the standard version
  for (; N > 0; --N)              // N iterations
    for (int i = 0; i < N-1; ++i) // except the last, O(N)
      if (a[i] > a[i+1])          // if these 2 items are out of order
        swap(a[i], a[i+1]);       // swap them in O(1)
}
A comparison and a swap each take time bounded by a constant; let us call it c.
There are two nested loops in (the standard) Bubble Sort.
The outer loop runs exactly N iterations, but the inner loop runs shorter and shorter:
- in the first pass, the inner loop does N-1 iterations of comparison (and possibly swap),
- in the second pass, N-2 iterations,
- ...,
- in the (N-1)-th pass, 1 iteration,
- in the N-th pass, 0 iterations.
Total time = c*((N-1)+(N-2)+...+1+0) = c*N*(N-1)/2 = O(N²).
However, it can terminate early, e.g., try the small ascending example [3, 6, 11, 25, 39] shown above; it terminates in O(N) time.
The improvement idea is simple: if an entire pass of the inner loop performs no swap at all, the array is already sorted and we can stop Bubble Sort at that point.
Discussion: Although it makes Bubble Sort run faster in general, this improvement does not change its O(N²) time complexity... Why?
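The early-termination idea above can be sketched as a small variant of the standard bubbleSort shown earlier (the `swapped` flag and the function name are our own choices):

```cpp
#include <algorithm> // std::swap

// Bubble Sort with early termination: stop as soon as one full
// inner pass performs no swap (the array is then already sorted).
void bubbleSortFast(int a[], int N) {
  bool swapped = true;
  for (; N > 0 && swapped; --N) {  // at most N passes
    swapped = false;
    for (int i = 0; i < N-1; ++i)
      if (a[i] > a[i+1]) {
        std::swap(a[i], a[i+1]);
        swapped = true;            // this pass changed something
      }
  }
}
```

On an already-sorted input, the first pass does no swap and the loop exits after O(N) work.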
- Find the position X of the smallest item in the range [L..N-1],
- Swap the X-th item with the L-th item,
- Increase the lower bound L by 1 and repeat step 1 until L = N-2.
Without loss of generality, we can also implement Selection Sort in reverse: find the position of the largest item Y and swap it with the last item.
void selectionSort(int a[], int N) {
  for (int L = 0; L <= N-2; L++) {       // O(N)
    int X = min_element(a+L, a+N) - a;   // index of the smallest item in [L..N-1], O(N)
    swap(a[X], a[L]); // O(1); note that X may equal L (no actual swap then)
  }
}
Complexity: O(N²). In fact, this is quite similar to the Bubble Sort analysis.
Quiz: How many (real) swaps are required to sort [29, 10, 14, 37, 13] by Selection Sort?
Insertion Sort is similar to how most people arrange a hand of poker cards:
- Start with one card in your hand,
- Pick the next card and insert it into its proper sorted position,
- Repeat the previous step for all cards.
Let's try Insertion Sort on the small example array [40, 13, 20, 8].
void insertionSort(int a[], int N) { // insertion sort
  for (int i = 1; i < N; i++) {             // O(N)
    int X = a[i];                           // X is the item to be inserted
    int j;
    for (j = i-1; j >= 0 && a[j] > X; j--)  // can be fast or slow
      a[j+1] = a[j];                        // make room for X
    a[j+1] = X;                             // this is the insertion point
  }
}
The outer loop runs N-1 times, which is clearly O(N). However, the number of times the inner loop executes depends on the input:
- In the best case, the array is already sorted and (a[j] > X) is always false, so no shifting of data is necessary and the inner loop runs in O(1),
- In the worst case, the array is reverse sorted and (a[j] > X) is always true; the insertion always occurs at the front of the array and the inner loop runs in O(N).
Quiz: What is the complexity of Insertion Sort on any input array?
If this is not yet clear to you, ask your instructor or read the related comments on this page.
We will discuss two comparison-based sorting algorithms in the next few slides:
PS: nevertheless, the non-randomized version of Quick Sort runs in O(N²) time on certain inputs.
- Merge each pair of individual elements (which are, by default, sorted) into sorted arrays of 2 elements,
- Merge each pair of sorted arrays of 2 elements into sorted arrays of 4 elements, and repeat the process...,
- Final step: merge 2 sorted arrays of N/2 elements (to simplify this discussion, we assume that N is even) to obtain a fully sorted array of N elements.
Given two sorted arrays A and B of sizes N1 and N2 respectively, we can efficiently merge them into one combined sorted array of size N = N1+N2 in O(N) time.
This is achieved by simply comparing the fronts of the two arrays and always taking the smaller of the two. However, this simple but fast O(N) merge sub-routine needs an additional array to do the merging correctly.
void merge(int a[], int low, int mid, int high) {
// subarray1 = a[low..mid], subarray2 = a[mid+1..high], both sorted
int N = high-low+1;
int b[N]; // discussion: why do we need a temporary array b?
int left = low, right = mid+1, bIdx = 0;
while (left <= mid && right <= high) // the merging
b[bIdx++] = (a[left] <= a[right]) ? a[left++] : a[right++];
while (left <= mid) b[bIdx++] = a[left++]; // leftover, if any
while (right <= high) b[bIdx++] = a[right++]; // leftover, if any
for (int k = 0; k < N; k++) a[low+k] = b[k]; // copy back
}
Try Merge Sort on the example array [1, 5, 19, 20, 2, 11, 15, 17]. Notice that its first half [1, 5, 19, 20] is already sorted and its second half [2, 11, 15, 17] is also sorted, so we only need to watch the final merge of the Merge Sort algorithm.
A Divide-and-Conquer algorithm solves (certain kinds of) problems, like our sorting problem, with the following steps:
- Divide step: divide the large, original problem into smaller sub-problems and recursively solve the smaller sub-problems,
- Conquer step: combine the results of the smaller sub-problems to solve the larger, original problem.
Merge Sort is a Divide-and-Conquer sorting algorithm.
The divide step is simple: divide the current array into two halves (perfectly equal if N is even, or one side slightly larger if N is odd), then recursively sort the two halves.
The conquer step is the one that does the heavy lifting: merge the two (sorted) halves to form a sorted array, using the merge sub-routine discussed earlier.
void mergeSort(int a[], int low, int high) {
// the array to be sorted is a[low..high]
if (low < high) { // base case: low >= high (0 or 1 item)
int mid = (low+high) / 2;
mergeSort(a, low , mid ); // divide into two halves
mergeSort(a, mid+1, high); // recursively sort them
merge(a, low, mid, high); // conquer: the merge sub-routine
}
}
That's it. Try Merge Sort on the example array [7, 2, 6, 3, 8, 4, 5]. It will recurse into [7, 2, 6, 3], then [7, 2], then [7] (a single element, sorted by default), backtrack, recurse into [2] (sorted by default), backtrack, then merge [7, 2] into [2, 7], before continuing with [6, 3] and so on.
When we call merge(a, low, mid, high), we process k = high-low+1 items. There will be at most k-1 comparisons. There are k moves from the original array a to the temporary array b, and another k moves back. In total, the number of operations inside the merge sub-routine is < 3k-1 = O(k).
The important question is: how many times is this merge sub-routine called?

The most important part of Merge Sort is its O(N log N) performance guarantee, regardless of the original order of the input. There is no adversarial test case that can make Merge Sort run longer than O(N log N) on an array of N elements.
Merge Sort is therefore very suitable for sorting large arrays, as O(N log N) grows much more slowly than the O(N²) sorting algorithms discussed earlier.
However, Merge Sort has a few not-so-good parts. First, it is not easy to implement from scratch (although we don't really have to). Second, it requires additional O(N) storage during the merge operation, thus it is not memory-efficient and not in-place. By the way, if you are interested in what has been done to address these (classic) drawbacks of Merge Sort, you can read this.
Merge Sort is also a stable sorting algorithm. Discussion: why?
Quick Sort is another Divide-and-Conquer sorting algorithm (the other one discussed in this visualization page is Merge Sort).
Before continuing to the randomized and usable version of Quick Sort, we will see that this deterministic, non-randomized version of Quick Sort can have a bad O(N²) time complexity on adversarial inputs.
Divide step: choose an item p (known as the pivot), then partition the items of a[i..j] into three parts: a[i..m-1], a[m], and a[m+1..j]. a[i..m-1] (possibly empty) contains items smaller than (or equal to) p. a[m] is the pivot p, i.e., index m is the correct position of p in the sorted order of array a. a[m+1..j] (possibly empty) contains items greater than (or equal to) p. Then, recursively sort the two parts.
Conquer step: don't be surprised... we do nothing!
If you compare this with Merge Sort, you will see that Quick Sort's divide and conquer steps are completely the opposite of Merge Sort's.
To partition a[i..j], we first choose a[i] as the pivot p.
The remaining items (i.e., a[i+1..j]) are divided into 3 regions:
- S1 = a[i+1..m] where items are ≤ p,
- S2 = a[m+1..k-1] where items are ≥ p, and
- unknown = a[k..j], where items are yet to be assigned to either S1 or S2.
Then, for each item a[k] in the unknown region, we compare a[k] with p and decide one of the following cases:
- if a[k] > p, put a[k] into S2,
- if a[k] < p, put a[k] into S1,
- if a[k] == p, flip a coin to decide whether to put a[k] into S1 or S2.
Finally, we swap a[i] and a[m] to put the pivot p right in the middle of S1 and S2.
![Case when a[k] ≥ p, increment k, extend S2 by 1 item](https://visualgo.net/img/partition1.png)
![Case when a[k] < p, increment m, swap a[k] with a[m], increment k, extend S1 by 1 item](https://visualgo.net/img/partition2.png)
int partition(int a[], int i, int j) {
int p = a[i]; // p is the pivot
int m = i; // S1 and S2 are initially empty
for (int k = i+1; k <= j; k++) { // explore the unknown region
if (a[k] < p) { // case 2
m++;
swap(a[k], a[m]); // C++ STL algorithm std::swap
} // note: we do nothing in case 1: a[k] >= p
}
swap(a[i], a[m]); // final step, swap pivot with a[m]
return m; // return the index of the pivot, to be used by Quick Sort
}
void quickSort(int a[], int low, int high) {
if (low < high) {
int m = partition(a, low, high); // O(N)
// a[low..high] ~> a[low..m–1], pivot, a[m+1..high]
quickSort(a, low, m-1); // recursively sort the left subarray
// a[m] = pivot is already sorted after partition
quickSort(a, m+1, high); // then sort the right subarray
}
}
After that, a[2] = 27 is guaranteed to be sorted; Quick Sort then recursively sorts the left side a[0..1] and afterwards the right side a[3..5].
First, we analyze the cost of one call of partition.
Inside partition(a, i, j), there is only a single for-loop that iterates (j-i) times. As j can be as large as N-1 and i as low as 0, the time complexity of partition is O(N).
Similar to the Merge Sort analysis, the time complexity of Quick Sort depends on how many times partition(a, i, j) is called.
In this worst-case input scenario, the following happens:

This is the classic N+(N-1)+(N-2)+...+1 pattern, which is O(N²), similar to the analysis in the Bubble Sort slide...
The best case of Quick Sort occurs when partition always splits the array into two equal halves, like in Merge Sort.
When that happens, the depth of the recursion is only O(log N).
As each level makes O(N) comparisons, the time complexity is O(N log N).
Try Quick Sort on this hand-crafted example input array [4, 1, 3, 2, 6, 5, 7]. This rarely happens in practice, though, so we need to devise a better way: Randomized Quick Sort.
In this e-Lecture, we will just assume that it is true.
If you need a non-formal explanation: imagine that with randomized pivot selection in Randomized Quick Sort, we do not always get the extremely bad split of 0 (empty), 1 (the pivot), and N-1 other items. There are lucky splits (half to the left of the pivot, half to the right), not-so-lucky splits, and extremely bad splits (empty, pivot, the rest). This mix of splits yields an average time complexity of O(N log N).
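The randomized pivot selection described above amounts to a one-line change before the deterministic partition (the function name `randomizedQuickSort` and the use of `std::rand` are our own choices for this sketch, not VisuAlgo's exact code):

```cpp
#include <algorithm> // std::swap
#include <cstdlib>   // std::rand

// Same partition scheme as in the earlier slides: a[i] is the pivot.
int partition(int a[], int i, int j) {
  int p = a[i], m = i;
  for (int k = i+1; k <= j; k++)   // explore the unknown region
    if (a[k] < p) std::swap(a[k], a[++m]);
  std::swap(a[i], a[m]);           // place pivot between S1 and S2
  return m;
}

// Randomized Quick Sort: swap a random item into position `low`
// before partitioning, so no fixed input is adversarial.
void randomizedQuickSort(int a[], int low, int high) {
  if (low < high) {
    int r = low + std::rand() % (high-low+1); // random index in [low..high]
    std::swap(a[low], a[r]);                  // random pivot moves to the front
    int m = partition(a, low, high);          // O(N)
    randomizedQuickSort(a, low, m-1);
    randomizedQuickSort(a, m+1, high);
  }
}
```

Because the pivot is chosen uniformly at random, the extremely bad splits happen only with low probability on every input.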
It is known (also not proven in this visualization as it will take about half-an-hour lecture about decision tree model to do so) that all comparison-based sorting algorithms have a lower bound time complexity of Ω(N log N).
Thus, any comparison-based sorting algorithm with worst-case complexity O(N log N), like Merge Sort, is considered an optimal algorithm, i.e., we cannot do better than that.
However, we can achieve a faster sorting algorithm, i.e., O(N), if certain assumptions about the input array hold, so that we can avoid comparing the items to determine the sorted order.
Assumption: If the items to be sorted are Integers with small range, we can count the frequency of occurrence of each Integer (in that small range) and then loop through that small range to output the items in sorted order.
Try Counting Sort on the example array above where all Integers are within [1..9]; thus we just need to count how many times Integer 1 appears, Integer 2 appears, ..., Integer 9 appears, and then loop through 1 to 9 to print out x copies of Integer y if frequency[y] = x.
The time complexity is O(N) to count the frequencies and O(N+k) to print out the output in sorted order, where k is the range of the input Integers, which is 9-1+1 = 9 in this example. The time complexity of Counting Sort is thus O(N+k), which is O(N) if k is small.
We will not be able to do the counting part of Counting Sort when k is relatively big due to memory limitation, as we need to store frequencies of those k integers.
PS: This version of Counting Sort is not stable, as it does not actually remember the (input) ordering of duplicate integers. The version presented in CLRS is stable, but is a bit more complex than this form.
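A minimal sketch of this (non-stable) form of Counting Sort, assuming the items are Integers in a small known range [lo..hi] (the function name and signature are our own):

```cpp
#include <vector>

// Counting Sort for integers in a known small range [lo..hi].
// O(N) to count frequencies, O(N+k) to write the output, where k = hi-lo+1.
void countingSort(std::vector<int>& a, int lo, int hi) {
  std::vector<int> freq(hi-lo+1, 0);
  for (int x : a) freq[x-lo]++;           // count occurrences of each value
  int idx = 0;
  for (int v = lo; v <= hi; v++)          // loop through the small range
    for (int c = 0; c < freq[v-lo]; c++)  // output freq[v-lo] copies of v
      a[idx++] = v;
}
```

Note how the output is rebuilt from the frequency table alone, which is why this version forgets the input ordering of duplicates.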
Assumption: If the items to be sorted are Integers with a large range but few digits, we can combine the Counting Sort idea with Radix Sort to achieve linear time complexity.
In Radix Sort, we treat each item to be sorted as a string of w digits (we pad Integers that have less than w digits with leading zeroes if necessary).
From the least significant (rightmost) digit to the most significant (leftmost) digit, we pass through the N items and put them, according to the active digit, into 10 Queues (one for each digit [0..9]), which is like a modified Counting Sort, as this one preserves stability (remember, the Counting Sort version shown earlier in this slide is not a stable sort). Then we re-concatenate the groups for the subsequent iteration.
Try Radix Sort on the random 4-digit array above for a clearer explanation.
Notice that we only perform O(w × (N+k)) iterations. In this example, w = 4 and k = 10.
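The digit-by-digit passes described above can be sketched as follows, using 10 queues per pass (this is our own illustration of the idea; the exact VisuAlgo implementation may differ):

```cpp
#include <queue>
#include <vector>

// LSD Radix Sort for non-negative integers with at most w decimal digits.
// Each pass distributes items into 10 queues by the active digit
// (a stable, Counting-Sort-like step), then re-concatenates them.
void radixSort(std::vector<int>& a, int w) {
  int div = 1;
  for (int pass = 0; pass < w; pass++, div *= 10) {
    std::queue<int> bucket[10];
    for (int x : a) bucket[(x / div) % 10].push(x); // group by active digit
    int idx = 0;
    for (int d = 0; d < 10; d++)                    // re-concatenate in digit order
      while (!bucket[d].empty()) {
        a[idx++] = bucket[d].front();
        bucket[d].pop();
      }
  }
}
```

Queues preserve the arrival order within each digit group, which is exactly the stability that makes the next pass correct.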
Now, having discussed about Radix Sort, should we use it for every sorting situation?
For example, it should be theoretically faster to sort many (N is very large) 32-bit signed integers as w ≤ 10 digits and k = 10 if we interpret those 32-bit signed integers in Decimal. O(10 × (N+10)) = O(N).
Discussion: Using base-10 as shown in this visualization is actually not the best way to sort N 32-bit signed integers. What should be the better setup?
In this section, we discuss in-place versus not in-place, stable versus not stable, and the caching performance of sorting algorithms.
A sorting algorithm is said to be in-place if it requires only a constant amount (i.e., O(1)) of extra space during the sorting process. That is, a small, constant number of extra variables is fine, but we are not allowed to use variables whose size depends on the length N of the input array.
Merge Sort (the classic version) is not in-place, as its merge sub-routine requires an additional temporary array of size N.
Discussion: How about Bubble Sort, Selection Sort, Insertion Sort, Quick Sort (randomized or not), Counting Sort, and Radix Sort? Which ones are in-place?
A sorting algorithm is called stable if the relative order of elements with the same key value is preserved by the algorithm after sorting is performed.
Example application of stable sort: Assume that we have student names that have been sorted in alphabetical order. Now, if this list is sorted again by tutorial group number (recall that one tutorial group usually has many students), a stable sort algorithm would ensure that all students in the same tutorial group still appear in alphabetical order of their names. Radix Sort, which goes through multiple rounds of sorting digit by digit, requires a stable sort sub-routine to work correctly.
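The tutorial-group scenario above can be demonstrated with std::stable_sort (the student data here is made up for illustration):

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Students are already sorted alphabetically; re-sort by tutorial group.
// std::stable_sort keeps the alphabetical order within each group.
std::vector<std::pair<std::string,int>> sortByGroup(
    std::vector<std::pair<std::string,int>> students) {
  std::stable_sort(students.begin(), students.end(),
                   [](const auto& x, const auto& y) {
                     return x.second < y.second; // compare group numbers only
                   });
  return students;
}
```

With a non-stable sort, names within the same group could end up in arbitrary order.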
Discussion: Which of the sorting algorithms discussed in this e-Lecture are stable?
Try sorting array A = {3, 4a, 2, 4b, 1}, i.e. there are two copies of 4 (4a first, then 4b).
Quiz: Which of these algorithms run in O(N log N) on any input array of size N?
Quiz: Which of these algorithms has worst case time complexity of Θ(N^2) for sorting N integers?
Selection Sort
Merge Sort
Insertion Sort
Bubble Sort
Radix Sort
Actually, the C++ source code for many of these basic sorting algorithms is already scattered throughout these e-Lecture slides. For other programming languages, you can translate the given C++ source code into your language of choice.
Usually, sorting is just a small part of the problem-solving process, and nowadays most programming languages have their own sorting functions, so we do not need to re-implement them unless absolutely necessary.
In C++, you can use std::sort, std::stable_sort, or std::partial_sort from the STL algorithm library.
In Java, you can use Collections.sort.
In Python, you can use sort.
In OCaml, you can use List.sort compare list_name.
If the comparison function is problem-specific, we may need to supply an additional comparison function to those built-in sorting routines.
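For example, sorting a list of (name, score) records by descending score can be done by passing a custom comparator to the built-in std::sort (the record type here is our own illustration):

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Sort (name, score) records by descending score using a custom
// comparator supplied to the built-in std::sort routine.
void sortByScoreDesc(std::vector<std::pair<std::string,int>>& records) {
  std::sort(records.begin(), records.end(),
            [](const auto& x, const auto& y) {
              return x.second > y.second; // higher score comes first
            });
}
```

The comparator must define a strict weak ordering; here, returning `x.second > y.second` inverts the default ascending order.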
Now that you have reached the end of this e-Lecture, do you think sorting is as simple as calling a built-in sorting routine?
Try these online judge problems to find out more:
Kattis - mjehuric,
Kattis - sortofsorting, or
Kattis - sidewayssorting.
This is not the end of sorting algorithms. As you explore other topics in VisuAlgo, you will realise that sorting is a pre-processing step for many advanced algorithms on harder problems, e.g., as a pre-processing step for Kruskal's algorithm, or used creatively inside the Suffix Array data structure, etc.