CS 240: Algorithms and Data Structures
James Madison University, Spring 2023

Introduction - What is the World’s Best Sorting Algorithm?

There is no single best sorting algorithm. Quicksort is the fastest known comparison-based sorting algorithm when applied to large, unordered sequences. It also has the advantage of being an in-place (or nearly in-place) sort. Unfortunately, quicksort has some weaknesses: its worst-case performance is \(\Theta(n^2)\), and it is not stable. Merge sort shares neither of these disadvantages: it is stable and it requires \(\Theta(n \log n)\) steps in the worst case. Unfortunately, merge sort requires \(\Theta(n)\) additional space, and it runs more slowly than quicksort on most inputs.

What we really want is a sorting algorithm that is as fast as quicksort, stable, in-place, and \(\Theta(n \log n)\) in the worst case. Sadly, no such algorithm has yet been discovered. In practice, most code libraries follow one of two paths: either they provide a modified version of quicksort that avoids worst-case behavior on typical inputs, or they provide a modified version of merge sort that closes some of the performance gap through careful optimizations*.

The objective of this assignment is to write modified versions of merge sort and quicksort that exhibit better performance than the standard versions described in our textbook. You will implement three improvements (two for merge sort and one for quicksort) that should improve some aspect of the algorithm’s performance without requiring a great deal of additional code. You will also experimentally evaluate the impact of your improvements.

The following zip file contains starter code, including the sort implementations provided by our textbook.

sorting.zip

This zip file has the following contents:

Note that SortProfiler relies on the JOpt Simple library for handling command-line arguments. You will need to have jopt-simple-5.0.4.jar in your classpath to use the application.

Part 1 - Improved Merges

The merge algorithm described in our textbook consists of the following two stages:

1. Merge the values from the two pre-sorted halves into a temporary array.
2. Copy the values from the temporary array back into the original array.

If \(n\) is the combined size of the two sub-arrays being merged, this algorithm requires a temporary array of size \(n\), and requires \(n\) assignment operations in stage 2. The following alternative approach cuts both of those values from \(n\) to around \(n/2\):

1. Copy the values from the first pre-sorted half to a temporary array.
2. Merge the values from the temporary array and the second
   pre-sorted half into the original array.

Some ASCII graphics will help illustrate this process. Suppose these are the two sorted sub-arrays that need to be merged:

Original sorted sub-arrays:
      ________________________________________
  ...| 1 | 3 | 5 | 7 | 9 | 2 | 4 | 6 | 8 | 10 |...
      ----------------------------------------
 start-^           mid-^               end-^

Instantiate a temporary array large enough for the first sorted sub-array:
      ___________________
     |   |   |   |   |   |
      -------------------
     0-^

      ________________________________________
  ...| 1 | 3 | 5 | 7 | 9 | 2 | 4 | 6 | 8 | 10 |...
      ----------------------------------------
 start-^           mid-^               end-^

Copy the sorted first half into the temporary array:
         ___________________
        | 1 | 3 | 5 | 7 | 9 |
         -------------------

         ________________________________________
     ...|   |   |   |   |   | 2 | 4 | 6 | 8 | 10 |...
         ----------------------------------------

Initialize tmpIndex to be the index of the first position (0) in the temporary array, rightIndex to be the index of the first position of the sorted right half, and mergeIndex to be the next available position for merging:
          ___________________
         | 1 | 3 | 5 | 7 | 9 |
          -------------------
  tmpIndex-^
         ________________________________________
      ...|   |   |   |   |   | 2 | 4 | 6 | 8 | 10 |...
         ----------------------------------------
mergeIndex-^        rightIndex-^

Since temp[tmpIndex] < items[rightIndex], temp[tmpIndex] is copied to items[mergeIndex]. tmpIndex and mergeIndex are incremented.
       ___________________
      | 1 | 3 | 5 | 7 | 9 |
       -------------------
   tmpIndex-^
       ________________________________________
   ...| 1 |   |   |   |   | 2 | 4 | 6 | 8 | 10 |...
       ----------------------------------------
 mergeIndex-^    rightIndex-^

Since items[rightIndex] < temp[tmpIndex], items[rightIndex] is copied to items[mergeIndex]. rightIndex and mergeIndex are incremented.
       ___________________
      | 1 | 3 | 5 | 7 | 9 |
       -------------------
   tmpIndex-^
       ________________________________________
   ...| 1 | 2 |   |   |   | 2 | 4 | 6 | 8 | 10 |...
       ----------------------------------------
     mergeIndex-^    rightIndex-^

Continue Merging until complete:
       ___________________
      | 1 | 3 | 5 | 7 | 9 |
       -------------------
       ________________________________________
   ...| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |...
       ----------------------------------------

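The walkthrough above can be sketched in Java. This version works on an int array rather than the starter code's generic Comparable arrays, and the inclusive start/mid/end index conventions are assumptions for illustration:

```java
import java.util.Arrays;

public class HalfMerge {
    /**
     * Merge items[start..mid] and items[mid+1..end] (both already sorted,
     * all indices inclusive) using a temporary array that holds only the
     * left half.
     */
    public static void merge(int[] items, int start, int mid, int end) {
        // Copy the sorted left half into the temporary array.
        int[] temp = Arrays.copyOfRange(items, start, mid + 1);

        int tmpIndex = 0;          // next unmerged item in temp
        int rightIndex = mid + 1;  // next unmerged item in the right half
        int mergeIndex = start;    // next open slot in items

        // Merge until one of the two sources is exhausted.
        while (tmpIndex < temp.length && rightIndex <= end) {
            if (temp[tmpIndex] <= items[rightIndex]) {  // <= keeps the merge stable
                items[mergeIndex++] = temp[tmpIndex++];
            } else {
                items[mergeIndex++] = items[rightIndex++];
            }
        }
        // Copy back any leftovers in temp. (Leftovers in the right half
        // are already in their final positions.)
        while (tmpIndex < temp.length) {
            items[mergeIndex++] = temp[tmpIndex++];
        }
    }
}
```

Comparing with <= when the two candidates are equal keeps the merge stable: ties are resolved in favor of the left half, so equal keys retain their original order.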
Use the sort profiling tool to test your finished implementation. You should not expect your improved merge sort to be dramatically faster than the provided implementation. Since array assignments are very fast relative to calls to compareTo, the speed impact will be minimal. The main advantage here is the savings in space.

Part 2 - Switching Strategies

The next improvement is based on the observation that merge sort is actually slower than simple \(\Theta(n^2)\) sorts for small input sizes. This may seem surprising given that merge sort is an \(\Theta(n \log n)\) sorting algorithm. However, it is important to keep in mind that asymptotic analysis is only concerned with rates of growth. A \(\Theta(n \log n)\) algorithm will always be faster than a \(\Theta(n^2)\) algorithm eventually, but that doesn’t mean the \(\Theta(n^2)\) algorithm can’t be faster for small inputs. The following figure was created by timing merge sort and insertion sort on small randomly ordered arrays from size 2 to size 150:

Sort timings

As you can see, insertion sort is faster until around \(n = 100\). At that point, merge sort becomes faster and it remains faster for all larger inputs.

As a reminder, the following pseudocode describes the overall logic of the merge sort algorithm:

merge_sort(sub-array)
    If sub-array has more than one entry: 
        Recursively merge_sort the left half
        Recursively merge_sort the right half
        Merge the two sorted halves.

This logic recursively splits the original array into smaller and smaller sub-arrays until the recursion bottoms out at sub-arrays of size one. This means that every time a large array is sorted, there are many recursive calls to merge sort that have small input sizes. In light of the figure above, that approach doesn’t make much sense: merge sort is not a competitive sorting algorithm on small inputs. It would make more sense to recursively break the input into smaller and smaller pieces until some threshold is reached, and then switch strategies to a sorting algorithm that is more efficient on those small inputs.

The following pseudocode describes this alternate approach:

merge_sort(sub-array)
    If sub-array has fewer than MERGE_SORT_THRESHOLD entries:
        Sort the sub-array with insertion sort. 
    Otherwise: 
        Recursively merge_sort the left half
        Recursively merge_sort the right half
        Merge the two sorted halves.
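
As a sketch, the pseudocode above might translate to the following int-array version (the starter code uses generic Comparable arrays, and the threshold shown is only a placeholder, not a tuned value):

```java
public class AdaptiveMergeSort {
    // Placeholder value; an appropriate threshold must be chosen experimentally.
    static final int MERGE_SORT_THRESHOLD = 64;

    /** Sort items[first..last] (inclusive). */
    public static void mergeSort(int[] items, int first, int last) {
        if (last - first + 1 < MERGE_SORT_THRESHOLD) {
            insertionSort(items, first, last);  // small input: switch strategies
        } else {
            int mid = (first + last) / 2;
            mergeSort(items, first, mid);       // recursively sort the left half
            mergeSort(items, mid + 1, last);    // recursively sort the right half
            merge(items, first, mid, last);     // merge the two sorted halves
        }
    }

    /** Insertion sort restricted to items[first..last] (inclusive). */
    static void insertionSort(int[] items, int first, int last) {
        for (int i = first + 1; i <= last; i++) {
            int current = items[i];
            int j = i - 1;
            while (j >= first && items[j] > current) {
                items[j + 1] = items[j];  // shift larger items right
                j--;
            }
            items[j + 1] = current;
        }
    }

    // Textbook-style merge with a full-size temporary array; Part 1's
    // half-space merge could be substituted here.
    static void merge(int[] items, int first, int mid, int last) {
        int[] temp = new int[last - first + 1];
        int left = first, right = mid + 1, k = 0;
        while (left <= mid && right <= last) {
            temp[k++] = (items[left] <= items[right]) ? items[left++] : items[right++];
        }
        while (left <= mid) temp[k++] = items[left++];
        while (right <= last) temp[k++] = items[right++];
        System.arraycopy(temp, 0, items, first, temp.length);
    }
}
```

Note that insertionSort only touches items[first..last], which is exactly the sub-array version of insertion sort this part of the assignment requires.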

Choosing an appropriate value for MERGE_SORT_THRESHOLD requires some experimentation. One of the requirements of this assignment is that you select an appropriate threshold value and provide data to justify your choice. Simply selecting the crossover point based on the figure above is not a reasonable strategy. The point where the two lines cross represents the input size where the two sorts are equally fast. Switching strategies at that input size will result in no speedup at all.

The only way to determine an appropriate choice for MERGE_SORT_THRESHOLD is to systematically experiment with different values until you find the one that leads to the best overall sorting performance. You should create a driver method that iterates over different values of MERGE_SORT_THRESHOLD and uses SortProfiler to evaluate each. The following minimal driver class illustrates how SortProfiler objects can be used directly without invoking the command line application:

Finally, note that we have provided an implementation of insertion sort that works on a full array, but you will need a version that works on a sub-array for this part of the assignment.
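
The threshold sweep itself can also be sketched standalone, without depending on SortProfiler's actual API (which is in the starter zip and is not reproduced here). Every name below is hypothetical, and your real driver should construct SortProfiler objects rather than hand-rolling the timing loop:

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.Consumer;
import java.util.function.IntConsumer;

public class ThresholdSweep {
    /**
     * Time `sort` on `trials` random arrays of length `n` for each candidate
     * threshold, returning the mean seconds per sort. The setThreshold hook
     * is a stand-in for however your sort exposes MERGE_SORT_THRESHOLD.
     */
    public static double[] sweep(int[] thresholds, IntConsumer setThreshold,
                                 Consumer<int[]> sort, int n, int trials) {
        double[] means = new double[thresholds.length];
        Random rand = new Random(42);  // fixed seed for repeatable inputs
        for (int t = 0; t < thresholds.length; t++) {
            setThreshold.accept(thresholds[t]);
            long total = 0;
            for (int trial = 0; trial < trials; trial++) {
                int[] data = rand.ints(n).toArray();
                long startTime = System.nanoTime();
                sort.accept(data);
                total += System.nanoTime() - startTime;
            }
            means[t] = total / (trials * 1e9);  // mean seconds per sort
        }
        return means;
    }

    public static void main(String[] args) {
        int[] thresholds = {2, 4, 8, 16, 32, 64, 128};
        // Arrays::sort stands in for the adaptive merge sort; it ignores the
        // threshold, so this run only demonstrates the harness mechanics.
        double[] means = sweep(thresholds, t -> {}, Arrays::sort, 100_000, 10);
        for (int i = 0; i < thresholds.length; i++) {
            System.out.printf("%d,%.6f%n", thresholds[i], means[i]);
        }
    }
}
```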

Part 3 - Introspective Sort

As mentioned above, quicksort is usually very fast. There are relatively few pathological inputs that result in \(\Theta(n^2)\) performance. The idea behind introspective sort is to perform a standard quicksort until there is evidence that the current input is pathological. If it looks like the current input will cause \(\Theta(n^2)\) performance, the algorithm switches strategies to a sorting algorithm with guaranteed \(O(n \log n)\) performance. Typically, heap sort is used as the alternative, since it is in-place and takes \(\Theta(n \log n)\) time in the worst case. Since we haven’t yet studied heap sort, your version of introspective sort should use the improved merge sort from Part 2 as the alternate sorting algorithm. The resulting algorithm is almost as fast as quicksort for most inputs, but has a worst case performance of \(\Theta(n \log n)\).

It is possible to detect pathological inputs by tracking the recursion depth of the quicksort algorithm. When quicksort is working well, the partition operation typically moves the pivot item somewhere near the center of the current sub-sequence. When this is the case, the maximum recursion depth will be \(\Theta(\log n)\). Therefore, introspective sort switches strategies when the recursion depth exceeds \(2 \lfloor \log_2 n\rfloor\).
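
Under those assumptions, the depth-limited quicksort might be sketched as follows. This int-array version uses java.util.Arrays.sort as a stand-in for the guaranteed \(\Theta(n \log n)\) fallback; your implementation should call the improved merge sort from Part 2 instead:

```java
import java.util.Arrays;

public class IntroSort {
    public static void sort(int[] items) {
        int n = items.length;
        // Depth budget: 2 * floor(log2 n). Exceeding it suggests the
        // partitions have been consistently lopsided.
        int depthLimit = 2 * (31 - Integer.numberOfLeadingZeros(Math.max(n, 1)));
        sort(items, 0, n - 1, depthLimit);
    }

    private static void sort(int[] items, int first, int last, int depthLimit) {
        if (first >= last) {
            return;
        }
        if (depthLimit == 0) {
            // Pathological input suspected: switch to a guaranteed
            // O(n log n) sort. Arrays.sort is only a stand-in here; the
            // assignment calls for the improved merge sort from Part 2.
            Arrays.sort(items, first, last + 1);
            return;
        }
        int pivot = partition(items, first, last);
        sort(items, first, pivot - 1, depthLimit - 1);
        sort(items, pivot + 1, last, depthLimit - 1);
    }

    // Lomuto-style partition using the last element as the pivot.
    private static int partition(int[] items, int first, int last) {
        int pivotValue = items[last];
        int boundary = first;  // everything left of boundary is < pivotValue
        for (int i = first; i < last; i++) {
            if (items[i] < pivotValue) {
                swap(items, i, boundary++);
            }
        }
        swap(items, boundary, last);
        return boundary;
    }

    private static void swap(int[] items, int i, int j) {
        int tmp = items[i];
        items[i] = items[j];
        items[j] = tmp;
    }
}
```

With a Lomuto-style last-element pivot, an already-sorted input drives the recursion depth toward \(n\), so the \(2 \lfloor \log_2 n \rfloor\) budget is exhausted quickly and the fallback takes over.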

Note that the provided version of merge sort sorts a full array. Introspective sort needs to be able to call a version of merge sort that can sort a portion of a provided array. The MergeSortImproved class provides a declaration for such a method.

Part 4 - Experimental Analysis

So far in this course we have focused on asymptotic analysis. This usually gives us the information we want. When deciding between two algorithms we generally prefer the one with better asymptotic performance.

Our goals for this project are somewhat different. The focus now is on fine-tuning existing algorithms to improve their performance. This is a situation where we really care about the constant factors. (We would be very happy if we could write a version of merge sort that is twice as fast, even if the asymptotic worst-case performance were still \(\Theta(n \log n)\).)

In order to help you evaluate your sorting improvements we are providing a command-line tool that can be used to systematically evaluate the run-time performance of different sorting algorithms.

Running the tool with no arguments should produce the following help message:

Option (* = required)         Description                                   
---------------------         -----------                                   
* -s <Integer: NUMBER>        Starting (smallest) input size                
* -i <Integer: NUMBER>        Input size increment                          
* -m <Integer: NUMBER>        Maximum input size to test                    
* -t <Integer: NUMBER>        Number of trials for each input size          
-w [String: SORT1,SORT2,...]  Comma separated list of sorts. Options include
                                insertion, selection, merge, merge_half,    
                                merge_adaptive, quick, introspective and    
                                timsort. Default is to execute all sorts.   
-g [String: GENERATOR]        Sequence generator. Options include random,   
                                ordered or evil. The default is random      

For example, we could compare the performance of quicksort and introspective sort on small inputs by executing the tool as follows:

$ java SortProfiler -s 1000 -i 1000 -m 10000 -t 10 -w quick,introspective

The resulting output would look something like the following:

N,  quick,  introspective
1000,   0.00009277, 0.00000085
2000,   0.00021152, 0.00000086
3000,   0.00030850, 0.00000088
4000,   0.00043710, 0.00000090
5000,   0.00055941, 0.00000088
6000,   0.00068144, 0.00000087
7000,   0.00081457, 0.00000091
8000,   0.00095075, 0.00000087
9000,   0.00105827, 0.00000089
10000,  0.00117701, 0.00000087

(Introspective sort is very fast here because it hasn’t been implemented yet.)

For this part of the project you will prepare a short document that quantifies the performance improvements you were able to achieve. The document should be organized around the following figures:

Each figure must have clearly labeled axes, a caption, and appropriate legends. There must be a paragraph describing the results illustrated in each figure. The description must provide enough information for the reader to understand the point of the experiment and how the data supports the conclusion.

Submission

Submit the following five files through Gradescope:

Test your code locally at each stage of development, and reason about its correctness. When you are reasonably confident in your solution, submit your code to Gradescope to run the autograder’s unit tests. There will be a point deduction for excessive Gradescope submissions.

Grading

Gradescope Tests for Improved Merge 20 points
Gradescope Tests for Switching Strategies 20 points
Gradescope Tests for Introspective Sort 20 points
Experimental Results Document 25 points
Instructor Style Points 10 points
Gradescope Style Checks 5 points
Excessive Submission Deduction -5 points if there are more than 20 submissions, -1 for each additional submission beyond 20.

The grading of your evaluation document will be based on the quality of the figures as well as the clarity of your writing.


*The second approach has recently become popular. Both Python (starting in version 2.3) and Java (starting in version 7) use Timsort as their default sorting algorithm. Timsort is a modified version of merge sort that includes several enhancements: it uses a more space-efficient merge operation, it takes advantage of partially sorted arrays by finding and merging existing sorted regions, and it uses a non-recursive binary insertion sort to handle short unsorted regions.