
A sorting algorithm is an algorithm that arranges the elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important because some algorithms (such as search and merge algorithms) require their input to be sorted in order to work correctly or to be optimized. Sorting algorithms are also used to process text data and to produce human-readable output. Fundamentally, the output of any sorting algorithm must satisfy two conditions:

  1. The output is in non-decreasing order (each element is no smaller than the previous element, according to the desired total order);
  2. The output is a permutation (a reordering) of the input.

Further, the input data is usually stored in an array, which allows random access, rather than a list, which only allows sequential access; though many algorithms can be applied to either type of data after suitable modification.

Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps because solving it efficiently is harder than its simple statement suggests; for example, bubble sort was already being analyzed in 1956.[1] Comparison sorts require at least O(n log n) time in the average and worst cases, but better performance is possible on real-world or nearly sorted data, and non-comparison sorts (such as counting sort) can beat this bound. Although many consider sorting a solved problem (asymptotically optimal algorithms have been known since the mid-20th century), useful new algorithms are still being invented, such as Timsort, invented in 2002 and used by many programming languages, and library sort, invented in 2006.

Sorting algorithms are frequently taught in computer science courses, and they illustrate many core algorithmic concepts, such as big O notation, divide and conquer, data structures such as binary trees, randomized algorithms, the analysis of time complexity, time-space tradeoffs, and upper and lower bounds.

Classification

Sorting algorithms are often classified by the following properties:

  • Computational complexity (worst, average, and best behavior) as a function of the size of the list n. Generally, O(n log n), and O(log² n) for parallel sorts, is considered fast, while O(n²) is considered slow (see big O notation). An ideal sorting algorithm would run in O(n), but for comparison sorts this is not achievable in the average case, while an optimized parallel sort can average O(log n). Comparison sorts, that is, sorting algorithms that examine the data only through an abstract key comparison operation, need at least O(n log n) time to sort most inputs.
  • Computational complexity of swaps (for in-place algorithms).
  • Memory usage (and use of other computer resources). Some sorting algorithms are in-place: a sort usually classified as in-place needs only O(1) additional space, though sorts using O(log n) additional space are sometimes also considered in-place.
  • Recursion: a sort is usually classified as either recursive or non-recursive, though a few can be implemented both ways (for example, merge sort).
  • Stability: stable sorting algorithms maintain the relative order of records with equal keys.
  • Comparison versus non-comparison sorts: a comparison sort orders the data only by comparing the relative size of pairs of elements.
  • General method of sorting: insertion, exchange, selection, merging, and so on. Exchange sorts include bubble sort and quicksort; selection sorts include selection sort and heapsort. The discussion of sorting algorithms below concentrates almost entirely on serial sorts.
  • Adaptability: whether the running time improves when the input is already roughly sorted. See adaptive sort.

Stability

This illustration with playing cards shows the difference between stable and unstable sorting. In a stable sort, the two 5s retain their original relative order, whereas in an unstable sort their relative order may be reversed.

When sorting some kinds of data, only part of the data is examined when determining the sort order. For example, in the card sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal, like the two 5 cards, then their relative order will be preserved, so that if one came before the other in the input, it will also come before the other in the output.

More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.

When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.

Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
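For illustration, here is a minimal Python sketch of this key-extension technique (the function name and example data are ours, not from any particular library): each record is decorated with its input position so that ties on the key are broken by original order, at the cost of the extra time and space noted above.

    def stable_via_index(records, key):
        # Decorate each record with its original index, sort by (key, index),
        # then discard the decoration. Because no two decorated keys compare
        # equal, even an unstable sort must produce the stable ordering.
        decorated = [(key(r), i, r) for i, r in enumerate(records)]
        decorated.sort(key=lambda t: (t[0], t[1]))
        return [r for _, _, r in decorated]

    cards = [(5, "hearts"), (3, "spades"), (5, "clubs")]
    print(stable_via_index(cards, key=lambda c: c[0]))
    # [(3, 'spades'), (5, 'hearts'), (5, 'clubs')] -- the two 5s keep input order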

One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:

[Illustration: the hand of cards after sorting by rank, then stably sorting by suit.]

Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys, and is leveraged by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which e.g. compares first by suits, and then compares by rank if the suits are the same.
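A short Python sketch of this two-pass technique, relying on the fact that Python's built-in list.sort (Timsort) is stable; the suit_order table and the example hand are ours, for illustration only.

    suit_order = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
    hand = [(9, "hearts"), (2, "spades"), (9, "clubs"), (5, "hearts")]

    hand.sort(key=lambda card: card[0])              # first pass: by rank
    hand.sort(key=lambda card: suit_order[card[1]])  # second pass: stable, by suit
    print(hand)  # within each suit, the cards remain ordered by rank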

Comparison of sorting algorithms

In this section, n is the number of records to be sorted. The columns "Average" and "Worst" give the time complexity in each case, under the assumption that the length of each key is constant, and that therefore all comparisons, swaps, and other needed operations can proceed in constant time. "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself, under the same assumption. The run times and the memory requirements listed below should be understood to be inside big O notation. Logarithms are of any base; the notation log² n means (log n)².

All of the sorts listed in the table below are comparison sorts, so their average and worst cases can be no better than O(n log n).

Comparison sorts
Name | Best | Average | Worst | Memory | Stable | Method | Notes
Quicksort | n log n | n log n | n² | log n on average, n in the worst case; the Sedgewick variation uses log n in the worst case | Typical in-place sort is not stable; stable versions exist | Partitioning | Quicksort is usually done in place with O(log n) stack space.[citation needed] Most implementations are unstable, as stable in-place partitioning is more complex. Naïve variants use an O(n) space array to store the partition.[citation needed] A quicksort variant using three-way (fat) partitioning takes O(n) comparisons when sorting an array of equal keys.
Merge sort | n log n | n log n | n log n | n worst case | Yes | Merging | Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm[clarification needed] or, more practically, Cole's parallel merge sort) for processing large amounts of data.
In-place merge sort | n/a | n/a | n log² n | 1 | Yes | Merging | Can be implemented as a stable sort based on stable in-place merging.[2]
Heapsort | n log n | n log n | n log n | 1 | No | Selection |
Insertion sort | n | n² | n² | 1 | Yes | Insertion | O(n + d),[clarification needed] where d is the number of inversions.
Introsort | n log n | n log n | n log n | log n | No | Partitioning & Selection | Used in several STL implementations.
Selection sort | n² | n² | n² | 1 | No | Selection | Stable with O(n) extra space, for example using lists.[3]
Timsort | n | n log n | n log n | n | Yes | Insertion & Merging | Makes n comparisons when the data is already sorted or reverse sorted.
Shell sort | n log n | depends on gap sequence | n^(4/3) or n log² n, depending on gap sequence; the best known is n log² n | 1 | No | Insertion | Small code size, no use of call stack, reasonably fast, useful where memory is at a premium such as embedded and older mainframe applications.
Bubble sort | n | n² | n² | 1 | Yes | Exchanging | Tiny code size.
Binary tree sort | n log n | n log n | n log n | n | Yes | Insertion | When using a self-balancing binary search tree.
Cycle sort | n/a | n² | n² | 1 | No | Insertion | In-place with a theoretically optimal number of writes.
Library sort | n/a | n log n | n² | n | Yes | Insertion |
Patience sorting | n/a | n/a | n log n | n | No | Insertion & Selection | Finds all the longest increasing subsequences in O(n log n).
Smoothsort | n | n log n | n log n | 1 | No | Selection | An adaptive sort: n - 1 comparisons when the data is already sorted, and 0 swaps.
Strand sort | n | n² | n² | n | Yes | Selection |
Tournament sort | n/a | n log n | n log n[4] | n | ? | Selection |
Cocktail sort | n | n² | n² | 1 | Yes | Exchanging |
Comb sort | n log n | n² | n² | 1 | No | Exchanging | Small code size.
Gnome sort | n | n² | n² | 1 | Yes | Exchanging | Tiny code size.
UnShuffle sort[5] | n | kn | kn | In place for linked lists; n × sizeof(link) for arrays | Can be made stable by appending the input order to the key | Distribution & Merging | No exchanges are performed. Performance is independent of data size. The constant k is proportional to the entropy in the input; k = 1 for ordered or reverse-ordered input, so the runtime is equivalent to checking the order, O(n).
Franceschini's method[6] | n/a | n log n | n log n | 1 | Yes | ? |
Block sort | n | n log n | n log n | 1 | Yes | Insertion & Merging | Combines a block-based O(n) in-place merge algorithm[7] with a bottom-up merge sort.
Bogosort | n | n × n! | unbounded | 1 | No | Luck (random shuffling) |

The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. As such, they are not limited by an Ω(n log n) lower bound. The complexities below assume n items to be sorted, with keys of size k, digit size d, and r the range of numbers to be sorted. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n << 2^k, where << means "much less than."

Non-comparison sorts
Name | Best | Average | Worst | Memory | Stable | n << 2^k | Notes
Pigeonhole sort | n/a | n + 2^k | n + 2^k | 2^k | Yes | Yes |
Bucket sort (uniform keys) | n/a | n + k | n²·k | n·k | Yes | No | Assumes a uniform distribution of elements from the domain in the array.[8]
Bucket sort (integer keys) | n/a | n + r | n + r | n + r | Yes | Yes | If r is O(n), then the average is O(n).[9]
Counting sort | n/a | n + r | n + r | n + r | Yes | Yes | If r is O(n), then the average is O(n).[8]
LSD radix sort | n/a | n·(k/d) | n·(k/d) | n + 2^d | Yes | No | [8][9]
MSD radix sort | n/a | n·(k/d) | n·(k/d) | n + 2^d | Yes | No | The stable version uses an external array of size n to hold all of the bins.
MSD radix sort (in-place) | n/a | n·(k/d) | n·(k/d) | 2^d | No | No | k/d recursion levels, 2^d for the count array.
Spreadsort | n/a | n·(k/d) | n·(k/s + d) | (k/d)·2^d | No | No | Asymptotics are based on the assumption that n << 2^k, but the algorithm does not require this.

The following table describes some sorting algorithms that are impractical for real-life use due to extremely poor performance or specialized hardware requirements.

Name | Best | Average | Worst | Memory | Stable | Comparison | Other notes
Bead sort | N/A | N/A | N/A | n/a | N/A | No | Requires specialized hardware.
Simple pancake sort | n/a | n | n | log n | No | Yes | Count is the number of flips.
Spaghetti (poll) sort | n | n | n | n² | Yes | Polling | This is a linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space, and the sort is stable. It requires n parallel processors. See Spaghetti sort#Analysis.
Sorting network | varies | varies | varies | varies | Varies (stable sorting networks require more comparisons) | Yes | The order of comparisons is set in advance based on a fixed network size. Impractical for more than 32 items.

Theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity assuming additional constraints, including:

  • Han's algorithm, a deterministic algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.[10]
  • Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite size, taking O(n log log n) time and O(n) space.[11]
  • A randomized integer sorting algorithm taking O(n √(log log n)) expected time and O(n) space.[12]


Popular sorting algorithms

While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heap sort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heap sort), used (in variant forms) in some C++ sort implementations and in .NET.

For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.

When physically sorting objects, such as alphabetizing papers (such as tests or books), people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heap sort or quick sort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.

Simple sorts

Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.

Insertion sort

Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and often is used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists.
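As a concrete illustration, here is a minimal in-place insertion sort sketch in Python (ours, not from the original article):

    def insertion_sort(a):
        for i in range(1, len(a)):
            x = a[i]                    # element to insert
            j = i - 1
            while j >= 0 and a[j] > x:  # shift larger elements right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x                # insert into its correct position

    data = [5, 2, 4, 6, 1, 3]
    insertion_sort(data)
    print(data)  # [1, 2, 3, 4, 5, 6]

On nearly sorted input the inner while loop rarely runs, which is why insertion sort approaches O(n) on such data.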

Selection sort

Selection sort is an in-place comparison sort. It has O(n2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.

The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.
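A minimal Python sketch of the procedure just described (illustrative only); note that the swap count is bounded by n - 1, the property highlighted above:

    def selection_sort(a):
        for i in range(len(a) - 1):
            m = i                          # index of the minimum of a[i:]
            for j in range(i + 1, len(a)):
                if a[j] < a[m]:
                    m = j
            if m != i:
                a[i], a[m] = a[m], a[i]    # at most n - 1 swaps in total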

Efficient sorts

Practical general sorting algorithms are almost always based on an algorithm with average complexity (and generally worst-case complexity) O(n log n), of which the most common are heap sort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.

While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data, and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heap sort).

Merge sort

Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity, and involves a large number of copies in simple implementations.
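The following Python sketch shows the core idea of merging sorted sublists; it is a top-down recursive variant (the paragraph above describes the bottom-up view), written by us for illustration:

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])
        right = merge_sort(a[mid:])
        out = []                           # merge the two sorted halves
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:        # <= keeps the sort stable
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

The O(n) auxiliary space mentioned above shows up here as the slices and the out list built during each merge.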

Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python[13] and Java (as of JDK7[14]). Merge sort itself is the standard routine in Perl,[15] among others, and has been used in Java at least since 2000 in JDK1.3.[16][17]

Heapsort

Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst case complexity.
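A compact in-place heapsort sketch in Python (ours, for illustration): the array itself stores the binary heap, with the children of index i at 2i + 1 and 2i + 2.

    def heapsort(a):
        n = len(a)

        def sift_down(root, end):
            # restore the max-heap property for the subtree rooted at `root`
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end and a[child] < a[child + 1]:
                    child += 1                      # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        for start in range(n // 2 - 1, -1, -1):     # build the heap in O(n)
            sift_down(start, n - 1)
        for end in range(n - 1, 0, -1):             # repeatedly extract the max
            a[0], a[end] = a[end], a[0]
            sift_down(0, end - 1)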

Quicksort

Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.

The important caveat about quicksort is that its worst-case performance is O(n2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n2) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm is however an O(n) operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
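As a sketch of these ideas, here is a compact in-place quicksort in Python using a random pivot and a Hoare-style crossing partition (an illustrative variant, not the canonical implementation):

    import random

    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[random.randint(lo, hi)]   # random pivot: expected O(n log n)
        i, j = lo, hi
        while i <= j:                       # partition around the pivot value
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        quicksort(a, lo, j)                 # recurse on the two partitions
        quicksort(a, i, hi)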

Bubble sort and variants

Bubble sort, and variants such as the cocktail sort, are simple but highly inefficient sorts. They are thus frequently seen in introductory texts, and are of some theoretical interest due to ease of analysis, but they are rarely used in practice, and primarily of recreational interest. Some variants, such as the Shell sort, have open questions about their behavior.

Bubble sort

A bubble sort, a sorting algorithm that continuously steps through a list, swapping items until they appear in the correct order.

Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average and worst-case performance is O(n2), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2n time.
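A minimal Python sketch of the passes just described (ours, for illustration); the sort stops as soon as a pass makes no swaps, which is what yields the fast behavior on nearly sorted data:

    def bubble_sort(a):
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(1, n):
                if a[i - 1] > a[i]:
                    a[i - 1], a[i] = a[i], a[i - 1]
                    swapped = True
            n -= 1   # the largest remaining element has bubbled to the end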

Shell sort

A Shell sort, different from bubble sort in that it moves elements to numerous swapping positions

Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
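A short Python sketch (ours) of Shell sort as a gapped insertion sort, using Shell's original halving gap sequence purely for illustration; better gap sequences are known:

    def shell_sort(a):
        gap = len(a) // 2
        while gap > 0:
            for i in range(gap, len(a)):   # insertion sort with stride `gap`
                x = a[i]
                j = i
                while j >= gap and a[j - gap] > x:
                    a[j] = a[j - gap]      # move elements gap positions at a time
                    j -= gap
                a[j] = x
            gap //= 2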

Comb sort

Comb sort is a relatively simple sorting algorithm originally designed by Wlodzimierz Dobosiewicz in 1980.[18] Later it was rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine article published in April 1991. Comb sort improves on bubble sort. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.)
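A small Python sketch (ours) of the idea: compare elements a shrinking gap apart, so that turtles move toward the front quickly; the commonly cited shrink factor of 1.3 is used here for illustration.

    def comb_sort(a):
        gap = len(a)
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))   # shrink the gap each pass
            swapped = False
            for i in range(len(a) - gap):
                if a[i] > a[i + gap]:
                    a[i], a[i + gap] = a[i + gap], a[i]
                    swapped = True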

Distribution sort

Distribution sort refers to any sorting algorithm where data are distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.

Counting sort

Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm cannot often be used because S needs to be reasonably small for it to be efficient, but the algorithm is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
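A minimal counting sort sketch in Python for bare integers (ours, for illustration); here S is the set {0, 1, ..., k - 1}, so the algorithm runs in O(n + k) time with O(k) extra memory:

    def counting_sort(a, k):
        # assumes every element of `a` is an integer in range(k)
        counts = [0] * k
        for x in a:
            counts[x] += 1                 # count occurrences of each value
        out = []
        for value, c in enumerate(counts):
            out.extend([value] * c)        # emit each value `c` times, in order
        return out

    print(counting_sort([3, 1, 3, 0, 2], k=4))  # [0, 1, 2, 3, 3]

The stable variant for full records replaces the final loop with a prefix-sum pass over counts to compute each record's output position.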

Bucket sort

Bucket sort is a divide and conquer sorting algorithm that generalizes Counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm.

Because bucket sort must use a limited number of buckets, it is best suited to data sets of limited scope. Bucket sort would be unsuitable for data with a great deal of variation, such as social security numbers.
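A short Python sketch of bucket sort (ours), assuming keys uniformly distributed in [0, 1); each bucket is sorted here with the built-in sort, but any sorting routine, including a recursive bucket sort, could be used:

    def bucket_sort(a, n_buckets=10):
        buckets = [[] for _ in range(n_buckets)]
        for x in a:
            buckets[int(x * n_buckets)].append(x)  # distribute into buckets
        out = []
        for b in buckets:
            out.extend(sorted(b))                  # sort each bucket individually
        return out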

Radix sort

Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves the performance of radix sort significantly.
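A small LSD radix sort sketch in Python for non-negative integers (ours); the per-digit distribution into bins is stable, which is exactly the property the LSD scheme depends on:

    def lsd_radix_sort(a, base=10):
        if not a:
            return a
        digits = 1
        while base ** digits <= max(a):    # number of digit passes needed
            digits += 1
        for d in range(digits):            # least significant digit first
            bins = [[] for _ in range(base)]
            for x in a:
                bins[(x // base ** d) % base].append(x)  # stable distribution
            a = [x for b in bins for x in b]
        return a

    print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))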

Memory usage patterns and index sorting

When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.

For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.

One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".[19]
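A small Python sketch of the tag sort idea (the record layout is ours, for illustration): only the compact index array is sorted, while the large records stay where they are.

    records = [{"key": 42, "payload": "..."},   # imagine large database rows
               {"key": 7,  "payload": "..."},
               {"key": 19, "payload": "..."}]

    # sort an array of positions by each record's key, not the records themselves
    index = sorted(range(len(records)), key=lambda i: records[i]["key"])

    for i in index:                  # one sequential pass reads in sorted order
        print(records[i]["key"])     # 7, 19, 42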

Another technique for overcoming the memory-size problem is to combine two algorithms in a way that takes advantages of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in mergesort. This is faster than performing either mergesort or quicksort over the entire list.

Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.

Inefficient sorts

Compared with the algorithms discussed above, some sorts are extremely inefficient, such as bogosort (with average time complexity O(n · n!)) and stooge sort (with average time complexity about O(n^2.7)).

Related algorithms

Related problems include partial sorting (sorting only the k smallest elements of a list, or alternatively computing the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide and conquer) or one side (quickselect, decrease and conquer).

A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Interestingly, shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
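A tiny Python sketch of the random-sort shuffle described above (ours); as noted, the Fisher-Yates shuffle (random.shuffle in Python) is preferred in practice.

    import random

    def shuffle_by_sorting(items):
        # assign each item a random key, then sort by those keys
        return [x for _, x in sorted((random.random(), x) for x in items)]

    print(shuffle_by_sorting(list(range(10))))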

See also

References

  1. ^ Demuth, H. Electronic Data Sorting. PhD thesis, Stanford University, 1956.
  2. ^ http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8381
  3. ^ http://www.algolist.net/Algorithms/Sorting/Selection_sort
  4. ^ http://dbs.uni-leipzig.de/skripte/ADS1/PDF4/kap4.pdf
  5. ^ Kagel, Art. Unshuffle, Not Quite a Sort. Computer Language. November 1985. 
  6. ^ Franceschini, Gianni. Sorting Stably, in Place, with O(n log n) Comparisons and O(n) Moves. Theory of Computing Systems. 1 June 2007, 40 (4): 327–353. doi:10.1007/s00224-006-1311-1. 
  7. ^ Kutzner, Arne; Kim, Pok-Son. Ratio Based Stable In-Place Merging. Lecture Notes in Computer Science 4978. Springer Berlin Heidelberg. 2008: 246–257 [2014-03-14]. 
  8. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms. 2nd ed. MIT Press and McGraw-Hill, 2001 [1990]. ISBN 0-262-03293-7.
  9. ^ Goodrich, Michael T.; Tamassia, Roberto. "4.5 Bucket-Sort and Radix-Sort". Algorithm Design: Foundations, Analysis, and Internet Examples. John Wiley & Sons, 2002: 241–243.
  10. ^ Y. Han. Deterministic sorting in O(n log log n) time and linear space. Proceedings of the thirty-fourth annual ACM symposium on theory of computing, Montreal, Quebec, Canada, 2002, pp. 602-608.
  11. ^ M. Thorup. Randomized Sorting in O(n log log n) Time and Linear Space Using Addition, Shift, and Bit-wise Boolean Operations. Journal of Algorithms, Volume 42, Number 2, February 2002, pp. 205-230(26).
  12. ^ Han, Y. and Thorup, M. 2002. Integer Sorting in O(n √(log log n)) Expected Time and Linear Space. In Proceedings of the 43rd Symposium on Foundations of Computer Science (November 16–19, 2002). FOCS. IEEE Computer Society, Washington, DC, pp. 135–144.
  13. ^ Tim Peters's original description of timsort
  14. ^ http://hg.openjdk.java.net/jdk7/tl/jdk/rev/bfd7abda8f79
  15. ^ Perl sort documentation
  16. ^ Merge sort in Java 1.3, Sun.
  17. ^ Java 1.3 live since 2000
  18. ^ Brejová, Bronislava. "Analyzing variants of Shellsort"
  19. ^ Definition of "tag sort" according to PC Magazine

External links