The Sorting Problem, Part 2: Stable comparison sorts.

Note: ‘upto’ = inclusive.

1) Bubble sort:

Start from one end of your input array of ‘n’ elements. Keep a counter (logical or actual) called ‘i‘ for the number of passes made so far. On every pass, start another pointer ‘j‘ from index 1 and go till the end of the array. If a[j-1]>a[j], swap the two.
Doing this, we start getting the elements in sorted order….from the opposite end of the array from which we started. With every pass of i, one more element settles into its sorted position at the end.

Hence, as the last i elements are already sorted after i passes, we can stop j before them.

Algorithm:

for i=0 upto A.length-1, do:
    for j=1 upto A.length - i - 1, do: 
        if A[j-1]>A[j] then: 
            swap (A[j-1], A[j])

Bubble sort can be modified with a ‘flag’ value in the if-statement, so that if no swaps happen in one traversal through the array, we exit the loops, because the array is already sorted. This reduces our best-case time from Ω(n^2) to Ω(n). Our worst-case time is still O(n^2), though, which happens when the array is reverse-sorted. It does not use any extra space.
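
To make the flag idea concrete, here is a minimal C sketch (my own illustration on a plain int array, not taken from any particular library):

#include <stdbool.h>

/* Bubble sort with an early-exit flag: if a full pass makes no swaps,
   the array is already sorted and we can stop. */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 1; j < n - i; j++) {   /* the last i elements are already in place */
            if (a[j - 1] > a[j]) {
                int tmp = a[j - 1];
                a[j - 1] = a[j];
                a[j] = tmp;
                swapped = true;
            }
        }
        if (!swapped)
            break;   /* no swaps in this pass: best case drops to Ω(n) */
    }
}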

Over the years, people have come out with various Bubble sort variations:

  • Cocktail-Shaker sort, a slightly more efficient version of Bubble, which alternates direction: one pass bubbles the largest remaining element to one end, the return pass bubbles the smallest remaining element to the other end. It typically runs somewhat faster than plain bubble sort, but that is still O(n^2) and hence it is not really worth mentioning more about. Like bubble sort, it is stable, since it only swaps adjacent out-of-order elements (a sketch appears after this list).
  • There also exists Comb sort, which is yet another bubble sort modification. It runs bubble-sort-style passes with different comb (gap) lengths, shrinking the comb length down to one in the last passes, at which point it is just ordinary bubble sort. This sort exploits a property both insertion and bubble sort have: almost-O(n) runtime on almost-sorted data. However, comb sort is unstable.
  • Odd-even sort is yet another bubble-sort derivative. No one really cares about it, either.
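
For the curious, here is a rough C sketch of the cocktail-shaker idea mentioned above (again just an illustration on an int array): each forward pass bubbles the largest remaining element to the right, and each return pass bubbles the smallest remaining element to the left.

#include <stdbool.h>

static void swap_ints(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Cocktail-shaker sort: bubble sort that alternates direction each pass. */
void cocktail_shaker_sort(int a[], int n)
{
    int lo = 0, hi = n - 1;
    bool swapped = true;
    while (swapped && lo < hi) {
        swapped = false;
        for (int j = lo; j < hi; j++)          /* forward pass: push the max to the right */
            if (a[j] > a[j + 1]) { swap_ints(&a[j], &a[j + 1]); swapped = true; }
        hi--;
        for (int j = hi; j > lo; j--)          /* backward pass: push the min to the left */
            if (a[j - 1] > a[j]) { swap_ints(&a[j - 1], &a[j]); swapped = true; }
        lo++;
    }
}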

 

2) Insertion sort:

Start i from the left end of your input array, at index 1. Hold the value of A[i] in a temporary variable, ‘key’. Moving backwards from i, shift every element larger than ‘key’ one place to the right (it is fine to overwrite A[i], as we have its value saved in ‘key’) until you find the proper place to put ‘key’, and put it there. Select your next A[i] as your new ‘key’ and repeat until sorted.

[Animation: insertion sort example]

Many people get insertion sort wrong; we don’t scan the unsorted part for its smallest element and append it to the sorted part…that’s selection sort.

Algorithm:

for i = 1 upto length(A) - 1
    key = A[i]
    j = i
    while j > 0 and A[j-1] > key
        A[j] = A[j-1]
        j = j - 1
    A[j] = key

The time complexity of insertion sort, like bubble sort, is O(n^2) when the array is reverse-sorted, and Ω(n) when it is already sorted. The average case is Θ(n^2).
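
If you would rather read it as compilable code, here is a direct C translation of the pseudocode above (just a sketch, assuming a plain int array):

/* Insertion sort: grow a sorted prefix one element at a time,
   shifting larger elements right to make room for the current key. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];   /* shift the larger element one place right */
            j--;
        }
        a[j] = key;            /* drop the key into its sorted position */
    }
}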

 

Insertion sort variations (Note: some are unstable):

  • Shell sort:
    Despite its Θ(n^2) average case, insertion sort is usually actually very fast on almost-sorted data. It is also very fast on very small amounts of data: it will beat O(nlogn) sorts for arrays of fewer than about 10 elements.
    This property of insertion sort gave rise to Shell sort, which uses insertion sort multiple times, on different sets of elements. The first time, it considers the array made up of every ‘a’th element, and sorts it accordingly. Then it selects every ‘b’th element (where b<a) and sorts it. Then it selects every ‘c’th element (where c<b<a) and sorts it. etc.
    a, b, c, d…. are called the ‘spans’ (gaps), and are usually stored in an array of their own; how they are selected is a subject on its own. In the last iteration of Shell sort, span=1 and it is basically insertion sort. By then the data is almost sorted, and insertion sort runs very fast on it.
    Unfortunately, Shell sort is unstable (i.e. it sorts, but it may change the relative positions of keys of equal values), and its usefulness really depends on how we select our spans. Most logically, we should select them in such a way that our largest span gives us subarrays of <10 elements each. But like I said, it’s complicated. (A sketch using one common span sequence appears after this list.)
    Likewise, calculating Shell sort’s complexity is complicated, but one thing is for sure: being a comparison sort, its worst case can be no better than Ω(nlogn), and for most span sequences it is considerably worse.

Shell sort, which has nothing to do with the UNIX shell.

 

  • Gnome sort is a mixture of insertion and bubble sort, described in 2000. It starts off like insertion sort, but instead of shifting elements forward like insertion, it swaps adjacent elements like bubble. It is, again, O(n^2). Gnome sort is stable because it changes so little in the way insertion sort works.
  • Library sort is essentially insertion sort with spaces left in the array, like a librarian leaves gaps in rows of books. It is supposed to be Θ(n.logn), but it is not really specified how to leave the spaces, which makes it impractical.
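
To make the ‘spans’ idea concrete, here is a rough C sketch of Shell sort. The halving span sequence used here (n/2, n/4, …, 1) is just one of many possible choices, picked purely for illustration:

/* Shell sort: insertion sort over elements 'gap' apart, with a shrinking gap.
   The final pass (gap == 1) is plain insertion sort on almost-sorted data. */
void shell_sort(int a[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            int key = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap];   /* shift within the gap-strided subarray */
                j -= gap;
            }
            a[j] = key;
        }
    }
}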

 

3) Merge sort:

This is one of the simplest O(n.logn) sorting algorithms. It runs like this:

[Animation: merge sort example]

 If merging is wrong I don’t wanna be right.

The full code for merge sort can be found online; I did not include it here as it is quite large. Merge sort can be implemented iteratively, but, by design, it is recursive, as it follows the divide-and-conquer procedure. In my opinion, though, the iterative version is the easier one to implement (a sketch of it appears below).

It has two basic procedures:

  1. Divide: pick the size of each of the two subarrays to merge: 1, then 2, then 4, then 8, etc….double the size of the subarrays to merge every iteration.
  2. Merge/combine: put the two sorted arrays in order to get a larger sorted array. Note: an array of size 1 is ‘sorted’.

Pretty simple, right?
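
Since I did not paste the full code, here is a rough bottom-up (iterative) sketch in C. The helper name `merge` and the caller-supplied aux buffer are my own choices for this illustration, not the only way to write it:

#include <string.h>

/* Merge two sorted runs a[lo..mid-1] and a[mid..hi-1] into aux, then copy back. */
static void merge(int a[], int aux[], int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        aux[k++] = (a[j] < a[i]) ? a[j++] : a[i++];   /* ties take from the left run, keeping the sort stable */
    while (i < mid) aux[k++] = a[i++];
    while (j < hi)  aux[k++] = a[j++];
    memcpy(a + lo, aux + lo, (size_t)(hi - lo) * sizeof(int));
}

/* Bottom-up merge sort: merge runs of size 1, then 2, then 4, and so on. */
void merge_sort(int a[], int aux[], int n)
{
    for (int size = 1; size < n; size *= 2) {           /* "divide": double the run size each pass */
        for (int lo = 0; lo + size < n; lo += 2 * size) {   /* only merge where a second run exists */
            int mid = lo + size;
            int hi  = (lo + 2 * size < n) ? (lo + 2 * size) : n;
            merge(a, aux, lo, mid, hi);                  /* "combine": merge the two sorted runs */
        }
    }
}

Here aux is an auxiliary array of length n supplied by the caller, which is exactly the O(n) extra space discussed further down; calling it is as simple as merge_sort(a, aux, n).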

One of the problems is figuring out why it is an O(nlogn) sorting algorithm. That requires a picture:

This is, honestly, a really shitty picture, because it is confusing for someone who has never seen Merge sort. Unfortunately, it is also the most accurate one I could find.

Think about it this way: you have 8 elements. What’s log(8) base 2? It’s 3. That’s the depth of your merging. Dividing takes essentially no time: just pass i + size as the length of the subarray to your merging function. Hence dividing is O(1).

Merging depends on the size. On the last pass, it has to merge n/2 elements with n/2 elements (4 with 4 in our example), which is the most it will ever have to, so that pass touches all n elements. For smaller sizes, say size 2, you merge each pair of subarrays, each of length ‘size’, from the start of the array till the end, which again touches all n elements. So every pass is O(n), and O(n) per pass x O(log n) passes = O(n.log n). You also need an auxiliary array of size ‘n’ to merge into, or else you will overwrite elements and hence lose their values. The data is copied back to the main array every time we double the size variable. (Note: alternately, we could swap the roles of the main and auxiliary arrays on each pass.)

Merge sort will work most cleanly when n is an exact power of 2, however it will work for every ‘n’…the last subarray will just come up a bit short.

Time complexity:

Merge sort runs like hell. Its worst case is O(n.logn), unlike Quicksort’s, and it is generally much faster than Heapsort in practice. It is very fast. And it is also stable; with a simple modification via an if-statement, our merging function can maintain the relative order of elements with equal key values.

The real problem with merge sort, in most cases, is the auxiliary memory: O(n) extra space is unacceptable in many situations, especially when dealing with databases holding hundreds of gigabytes of data, or other memory-limited systems.

There is one exception, though: merge sort on a linked list. This will take O(1) auxiliary space if you enter your data into a linked list. And it will still run in O(n.log n).

Merge sort variations:

  • Timsort, which was developed in 2002 by Tim Peters for the Python programming language. It is now the standard sorting algorithm in Python and is used in both Java and Android. It is a combination of insertion sort and merge sort, and runs in O(n.logn) in the worst case and O(n) in the best case (plain merge sort is O(n.logn) even in the best case). Timsort is stable, unlike Shell sort.
  • There is also another insertion-sort/merge-sort hybrid, called Blocksort. It runs in place in O(n.logn) time, with Ω(n) as its best case. It is a pretty good optimization of Merge sort down to O(1) space; however, it does require rotating blocks of data. It is also stable.

 


Sorting linked lists:

Linked lists, as you probably know, are data structures where every Node is an object/structure that stores some data. One of the member variables of each node is a pointer to the next Node in the sequence. Sometimes there is also a pointer to the previous node, in which case it is called a doubly linked list.

If, in a singly linked list, the last node points to the first, it is called a circular singly linked list. If, in a doubly linked list, the last node and first node point to each other, it is called a circular doubly linked list.

With linked lists, we allocate each node dynamically and not all at once like in an array. However, like arrays, linked lists are linear data structures. The main difference between linked lists and arrays is that in linked lists, access time for any element at random is O(n), whereas in an array it is O(1); in fact, it is exactly one step: A[x].

This is because, in the computer’s memory, creating an array allocates memory in a sequential order, whereas in a linked list, it is necessary to create every node in a separate location in memory, which can only be accessed via the pointer to that node….which is the pointer of the previous node in our linked list (and also the next node in a doubly linked list). We start accessing the nodes via a pointer to the start of the list, called the ‘list’ or ‘head’ pointer. From there, we must traverse the list to reach any other node in the list. Thus,

An array has random access and a linked list has linear access.

If we lose the pointer to a node, we can’t ever find that node again in memory. This is inconvenient compared to arrays, where we only need the first element’s position in memory, and then all positions in memory are relative to it (<- this is what A[x] means: we access the ‘x’th element relative to A[0]).
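
To make this concrete, here is a small C sketch (the type and function names are made up for illustration) of a singly linked list node and of fetching the ‘x’th element, which is O(n) because we must walk the links; an array does the same thing in one step with A[x]:

#include <stddef.h>

struct Node {
    int data;
    struct Node *next;    /* a doubly linked list would also carry a 'prev' pointer */
};

struct List {
    struct Node *head;    /* the 'list' or 'head' pointer we start traversals from */
};

/* Fetch the x'th node of a singly linked list: O(n), because we must walk the links. */
struct Node *get_nth(struct List *list, int x)
{
    struct Node *p = list->head;
    for (int i = 0; i < x && p != NULL; i++)
        p = p->next;
    return p;    /* NULL if the list has fewer than x+1 nodes */
}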

However, linked lists do trump arrays in one respect: allocation. In a linked list, if we make a new node, we can insert it basically anywhere we want. Arrays, meanwhile, are immovable without copying: inserting a new element into the middle of an array requires creating a brand new array with n+1 elements and copying all the data into it. This is because:

An array has bulk allocation whereas a linked list has discrete allocation.
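
As a small sketch of what ‘discrete allocation’ buys us (reusing the Node struct from the sketch above): splicing a freshly allocated node after a node we already hold is a constant number of pointer assignments, whereas an array would have to shift or copy everything after the insertion point.

#include <stdlib.h>

/* Insert a new node holding 'value' right after node 'pos': O(1), nothing is shifted or copied. */
struct Node *insert_after(struct Node *pos, int value)
{
    struct Node *z = malloc(sizeof *z);
    if (z == NULL)
        return NULL;               /* allocation failed */
    z->data = value;
    z->next = pos->next;           /* the new node adopts the old successor */
    pos->next = z;                 /* the predecessor now points at the new node */
    return z;
}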

Arrays and linked lists are, therefore, sort of like the Yin and Yang of data structures.

Yin and Yang stands for two forces which seem opposite, but which are actually interconnected and complementary.

 

One of the really cool things about insertion sort and bubble sort: you can run them both on linked lists. In fact, insertion in sorted order into a linked list is basically insertion sort. You just have to modify your algorithm a bit. You can also do this with merge sort.

Here is a simple insertion sort on a linked list. It uses a helper procedure, INSERT-SORTED, which inserts a Node ‘z’ into its sorted position in an already-sorted linked list. This will sort in O(n^2) time because, if you look closely, it exactly mirrors insertion sort on an array.

SORT-LINKED-LIST(List)
    if (List.head==NULL || List.head->next==NULL)   
        return 
    i=List.head
    while(i->next != NULL)
        j=i->next
        temp=i->next->next 
        i->next=NULL 
        /*we have now detached our linked list into two smaller 
lists, one starting from temp, and one ending at i. We do this 
because in insertion sort on an array, we go backwards from the 
ith element to the first element and 'insert' into the correct 
position. 
Here, we go forward from the first element to the ith element, 
and insert into the correct place. The two algorithms are 
virtually the same; the element might be inserted anywhere between
the first and ith element for both. */
        INSERT-SORTED(j, List) //calls the function
        if(i->next==j)
            i=i->next
        i->next=temp //reattach the linked lists
INSERT-SORTED (z, List) 
    if List.head==NULL
        List.head=z
        return
    p=List.head
    prev=NULL
    while(p!=NULL)
        if(z->data <= p->data)
            if(p==List.head)
                z->next=List.head
                List.head=z
 
            else
                z->next=prev->next
                prev->next=z
 
             return
        prev=p
        p=p->next
 
    prev->next = z

For bubble and insertion sort, the efficiency doesn’t change by using a linked list. It is just the same as running on an array: they don’t need extra space and will run in O(n^2) time. They take roughly the same amount of actual time, too.

The same cannot be said about Merge sort…….Merge sort actually gets more efficient when you run it on a linked list. Merge sort’s biggest limitation was its auxiliary space. With a linked list, rearranging elements is easy: granted, you will need a few pointers, but it does not need the massive O(n) auxiliary space that was needed for sorting an array! Yay! 😀 Proving this, however, is left as an “exercise to the user” 😛 (Note: hint).
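
I won’t spoil the whole exercise, but here is a hedged hint of the key step, sketched in the same style as the code above: merging two already-sorted linked lists needs only a handful of pointers and no auxiliary array.

#include <stddef.h>

/* Merge two sorted singly linked lists by re-linking their nodes: O(1) auxiliary space.
   Ties take a node from list 'a' first, so the merge stays stable. */
struct Node *merge_sorted_lists(struct Node *a, struct Node *b)
{
    struct Node dummy;                 /* dummy head on the stack, just to simplify the splicing */
    struct Node *tail = &dummy;
    dummy.next = NULL;
    while (a != NULL && b != NULL) {
        if (a->data <= b->data) { tail->next = a; a = a->next; }
        else                    { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;  /* splice on whatever is left over */
    return dummy.next;
}

The splitting half of the algorithm can also be done with nothing but pointer manipulation (one common trick is a slow/fast pointer pair to find the middle of the list), which is the rest of the exercise.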

Quicksort and Heapsort, too, can be run on linked lists, but the truth is, it is wonky and shoddy and weird to do so. These are unstable algorithms, which should be a big pointer to the fact that they don’t access data linearly: their efficiency depends on random access into an array. In a linked list, accessing an arbitrary element is O(n). For Quicksort this isn’t really too much of a problem, and Quicksort on a linked list can be (and has been) done… but it adds a lot of overhead to the constant, and Quicksort’s low constant in its average-case complexity is the only real reason we ever use it in the first place. As for Heapsort, using a linked list would really be unacceptable in the ‘heapify’ stage, and puts a big dent into its already bloated running time, perhaps even pushing it to O(n^2) or beyond.

So, why is it that stable sorting algorithms work really well with linked lists?

This is conjecture, but I really think it is because in a stable sort, you compare one element with every other unsorted one. Insertion and bubble sort do this very obviously. Merge sort does it less obviously, but it does it too, while merging. Stable sorts compare two objects with the same key value directly, and then place them in their relative positions. This is best done by going from the start of the array to the end, i.e. linearly accessing your array. As a linked list has linear access, it is hence a good match for stable sorting algorithms.

 

Welp, that’s all for now, folks!

I’m going to continue this post in “The Sorting Problem, Part 3: unstable Comparison sorting”, where I will deal with Quicksort, Heapsort, and why they were invented in the first place.

Also stay tuned for “The Sorting Problem, Part 4: Why Not Radix?” where I deal with the strange, giggling non-comparative sorts. So, make sure you keep your thumbs pointed to the stars, and your browsers open!

About the author:

Abhishek Divekar,

as a kid, really wanted a dog. But his parents bought him a computer instead. And so he decided to trade one for the other, and has loved computers unconditionally ever since. They, meanwhile, have barely just tolerated him. At least, he reasons, they would never eat his clothes or spill fluids all over his bed. Except for that one time, with the Dell.
