Dynamic Programming [1]: Little steps

Let’s start this post with a perfectly reasonable assumption:

You are a rabbit farmer, and on your rabbit farm, you have rabbits.

That’s good. That’s a good direction. Yeah.

Rabbits, as you know, multiply. You started with two, but after half a year, you have somewhere north of thirty of the furry little munchkins. Being a diligent farmer, you must calculate how many rabbits you will have after a few more months (in order to stock up on things such as rabbit feed, poop-scoops and order slips from mystic tribesmen).

You started with two, but you noticed a pattern: after every month, the number of new rabbits you have is the total number of rabbits you had, two months ago. In other words, your new total number of rabbits is the sum of the last two months’ totals: 2, 3, 5, 8, 13, 21, 34, 55 … and if you know anything about maths, you’ll see where this thread of thought is going.

My first experience with the Fibonacci sequence occurred in a wonderful little book series named Murderous Maths, a few of which I still own today and cannot recommend highly enough (except by saying that it tickled my grey matter). In the book (I forget which one) a bunch of British characters all find themselves in different British situations, all of which revolve around the number sequence 0, 1, 1, 2, 3, 5, 8, 13, 21 etc. One of them, Rodney, is actually farming rabbits. I thought it was a hoot at the time; but, as I found out while researching this article, the author of Murderous Maths had taken his cues from history: Fibonacci himself used a rabbit-breeding puzzle to describe the sequence, which is how it came to bear his name.

It also led to the very famous golden ratio:
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.61803\,39887\cdots

Divide any two consecutive Fibonacci numbers, and you should get approximately this ratio.

All of this is mostly immaterial, except that the Fibonacci sequence seems to be the most basic introduction to dynamic programming that anyone can think of.

Dynamic programming is (for want of better words) everywhere. Seriously. Go just about anywhere on the web and they’ll tell you how it relates to the Fibonacci sequence [basically, you should be smart enough to know that: to calculate the next month’s number of rabbits as time progresses, you don’t need to start from scratch every time; you can re-use your old results].
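To make that concrete before moving on, here's a minimal Python sketch of the "reuse your old results" idea; the function and cache are my own naming, not from any particular library:

def fib(n, cache=None):
    # n'th Fibonacci number (0, 1, 1, 2, 3, 5, ...), re-using old results
    if cache is None:
        cache = {0: 0, 1: 1}
    if n not in cache:
        cache[n] = fib(n - 1, cache) + fib(n - 2, cache)
    return cache[n]

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]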

I don’t think that beating that horse will do any good. So I’m going to do a slightly harder problem, and one that is not as immediately intuitive: calculating Binomial coefficients.

The Binomial theorem (which I'm sure you recall in perfect clarity) is used for many things, one of which is to find the coefficients of the terms in the expansion of (x+y)^n. The r'th coefficient is given by:

\binom{n}{r} = \frac{n!}{r!\,(n-r)!}

This is also the number of ways in which you can select 'r' items out of 'n' items; e.g. if you have 10 balls labelled A, B, C, D…, in how many ways can you select any 3 of them? The answer is 10C3 = 10!/(3!·7!) = 120 ways (more than you'd expect).

If you're particularly mathsy, you might even know that these coefficients are the terms of Pascal's triangle, with n = 0, 1, 2, … counting the rows from the top of the triangle.

Pascal’s triangle: square version and party version.

Take a look at the second figure, and you will find it tells you many useful things: the third column goes as 1, 3, 6, 10, 15…, which are the triangular numbers (1), (1+2), (1+2+3), (1+2+3+4), (1+2+3+4+5) etc. The fourth column? Tetrahedral numbers. The fifth row? Treat its numbers as the digits of a single number, and it is (11)^5; the same holds for every row, provided you carry over the two-digit entries. The sum of the numbers in the n'th row is 2^n. The diagonals, if you notice, are the same as the columns. There's a lot to be learned from this triangle, and calculating it efficiently is an important job.

A job for….Dynamic Programming.

(I remember, the coffee was fantastic)

Dynamic Programming involves re-using old results. Take a look at the second Pascal's triangle again. You can calculate any row from the row directly above it: the j'th element of row i is the sum of the j'th and (j-1)'th elements of row i-1, for 0 < j < i, while the first and last elements of every row are 1.

So, you can use dynamic programming to calculate all the binomial coefficients, because calculating n!/((n-r)!·r!) directly can easily go out of bounds for your data types (e.g. 100! ≈ 9.332622E+157).

I’m sure it must be obvious what to do;

Binom_coeff_tab(i, j):     // "tab" for tabulation
    int pascals_tri[i+1][i+1]    // assume these are all zeroed-out
    pascals_tri[0][0] = 1
    for (row = 1; row <= i; row++)           // rows
        for (col = 0; col <= row; col++)     // columns
            if (col == 0 || col == row)   pascals_tri[row][col] = 1
            else   pascals_tri[row][col] = pascals_tri[row-1][col-1] + pascals_tri[row-1][col]
    return pascals_tri[i][j]    // iCj, the coefficient we want
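If you'd rather see that as runnable code, here's a rough Python translation of the pseudocode above (my own naming, kept close to the original):

def binom_coeff_tab(i, j):
    # tabulation: build Pascal's triangle row by row, then read off C(i, j)
    pascals_tri = [[0] * (i + 1) for _ in range(i + 1)]
    pascals_tri[0][0] = 1
    for row in range(1, i + 1):
        for col in range(row + 1):
            if col == 0 or col == row:
                pascals_tri[row][col] = 1
            else:
                pascals_tri[row][col] = (pascals_tri[row - 1][col - 1]
                                         + pascals_tri[row - 1][col])
    return pascals_tri[i][j]

print(binom_coeff_tab(10, 3))   # 120, matching the 10C3 example above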

QED, you say. And it is. But the point of this was to learn Dynamic Programming, so you're going to have to learn how to use recursion. You may remember recursion: that odd beast which enchants you with its utter simplicity. You fall in love with its ability to turn the most complicated problems into ten lines of code…until you try to write one on your own. Then, you hate it with a vengeance, as all humans hate what they do not understand.

Monologue aside, recursion has a big part to play in DP; all DP-related subproblems are defined most succinctly in recursive form. But Haskell-lovers should put down their glasses of champagne. It turns out that though almost all DP problems are defined recursively, they are often not implemented that way. In the broad sense, there are two ways of going about Dynamic Programming problems:

  1. Memoization:

    This is a purely recursive, top-down approach. It's also called caching, and, in general, it uses less space than the next method. In memoization, whenever we need the solution to a subproblem, we first look into a lookup table (I say table, but really only a few values are filled). If the precomputed value is there, we return it; otherwise we calculate the value and put the result into the lookup table so that it can be reused later. Memoization calculates values on demand. This means that, computationally, it should be faster, as you only calculate the values you need. However, there may be overheads, and the code is more complex.

  2. Tabulation:

    This is a very straightforward, bottom-up approach. You start from the base case and iteratively build an n-dimensional table of all the values computed so far; it computes values for all the sub-problems. Based on the results in the table, the solution to the "top" / original problem is then computed. It is usually faster than memoization because of the smaller overhead, and it's more predictable, too. But you do have to calculate a lot of values you may never need.

(source for the above)

So, which do I use?

  • If all subproblems must be solved at least once, a bottom-up (i.e. tabulation) dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor.

    • No stack overhead for recursion and less programming overhead for maintaining the table.
    • There are some problems for which the regular pattern of table accesses in the dynamic-programming algorithm can be exploited to reduce the time or space requirements even further.
  • If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required.
    Memoization can have tables too (as I’m about to show you). But Memoization works in a top-down fashion, rather than bottom-up.

The code we saw of the Binomial coefficients was a tabulation solution. Let’s try Memoization now.

In DP, we define the problem as a recurrence relation between sub-problems. What is our problem? Get the binomial coefficient C(i, j), i.e. the coefficient of the j'th term of the i'th row of Pascal's triangle. This happens to be iCj, by the way, i.e. i!/((i-j)!j!).

We denoted C(i, j) as pascals_tri[i][j] in the code above. We noticed that pascals_tri[i][j] = pascals_tri[i-1][j-1] + pascals_tri[i-1][j]; let’s do the same with our binomial coefficient, and say:

C(i, j) = C(i-1, j-1) + C(i-1, j)     if 0 < j < i

C(i, j) = 1                           if j = 0 or j = i

This is our recurrence relation (the problem in terms of the subproblems). Now, if asked to calculate iCj, we run the following recursive program.

Binom_coeff_rec(i, j):
    if (j==0 or j==i)
        return 1
    else
        return Binom_coeff_rec(i-1, j-1) + Binom_coeff_rec(i-1, j)

This works. But it's extremely slow, as the screenshot I've provided demonstrates.

Dynamic programming: tabulation vs. vanilla recursion.

As input, this program takes a number N and uses it to pick both arguments: i becomes N, and j becomes ⌊N/2⌋, which gives the largest binomial coefficient in that row (this is just to demonstrate how it works).

As N becomes bigger, the simple recursion becomes VERY slow, while tabulation performs fairly well. This is because the simple recursive program runs in exponential time i.e. O(2^n), whereas the tabulation runs in O(n^2).

To put that in perspective, 30^2 is 900, whereas 2^30 is ~1 billion.

(Not gonna say it, not gonna say it….)

Note: the ratio of the two algorithms’ time complexity is not represented accurately in the time they take, because of the Python environment (and other things like caching etc); but comparing the times taken for different values of N, you can tell that the algorithms scale exponentially and quadratically, respectively.
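If you want to reproduce that kind of comparison yourself, a rough timing sketch in Python might look like the one below; the exact numbers depend on your machine and Python version, but the exponential blow-up of the plain recursion should be obvious even for modest N:

import time

def binom_rec(i, j):
    # plain recursion: exponential number of calls
    if j == 0 or j == i:
        return 1
    return binom_rec(i - 1, j - 1) + binom_rec(i - 1, j)

def binom_tab(i, j):
    # tabulation: O(n^2) table fill
    tri = [[0] * (i + 1) for _ in range(i + 1)]
    for row in range(i + 1):
        for col in range(row + 1):
            tri[row][col] = 1 if col in (0, row) else tri[row - 1][col - 1] + tri[row - 1][col]
    return tri[i][j]

for n in (16, 20, 24):            # keep N modest; the recursive version gets slow very fast
    for f in (binom_rec, binom_tab):
        start = time.perf_counter()
        f(n, n // 2)
        print(f.__name__, "N =", n, round(time.perf_counter() - start, 4), "seconds")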

So, simple recursion, while top-down, is quite terrible. That’s because we haven’t memoized our results, and we keep re-calculating them. Let’s try this again, but with the following memoization code:

Binom_coeff_mem(table, i, j):
    if (j==0 or j==i)
        table[i][j] = 1
    else
        if (table[i-1][j-1]==0)    table[i-1][j-1] = Binom_coeff_mem(table, i-1, j-1)
        if (table[i-1][j]==0)      table[i-1][j]   = Binom_coeff_mem(table, i-1, j)
        table[i][j] = table[i-1][j-1] + table[i-1][j]
    return table[i][j]

Binom_coeff_mem_driver(i, j):    // driver program for the above recursive function
    int table[i+1][i+1]    // assume all entries are zeroed-out
    return Binom_coeff_mem(table, i, j)
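Here is roughly how that memoized version might look in Python. One small liberty: I use None rather than 0 as the "not yet computed" marker, which is a slightly safer sentinel, but otherwise it mirrors the pseudocode:

def binom_coeff_mem(table, i, j):
    # memoization: fill in only the table entries that are actually needed
    if table[i][j] is None:
        if j == 0 or j == i:
            table[i][j] = 1
        else:
            table[i][j] = (binom_coeff_mem(table, i - 1, j - 1)
                           + binom_coeff_mem(table, i - 1, j))
    return table[i][j]

def binom_coeff_mem_driver(i, j):
    # driver: set up an empty (i+1) x (i+1) table, then recurse
    table = [[None] * (i + 1) for _ in range(i + 1)]
    return binom_coeff_mem(table, i, j)

print(binom_coeff_mem_driver(10, 3))   # 120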

I ran this program, and, as expected, it was much, much faster than the simple recursive one. In fact, it is expected to run faster than the tabulation (iterative) one, because it does not need to calculate all the values!

To give you some intuition about why this happens, I added a line of code in the program that prints the entire table after each stage:

Dynamic Programming: tabulation vs memoization
As you can see, the tabulation algorithm has to calculate ALL the values of ALL the rows, before getting to the one it wants. The upside is that the code is very simple and quite fast. But the Memoized code is even faster, because it only has to calculate the values it wants, i.e. the ones along the diagonal.
I know, the picture says that the memoized code is actually slower than the tabulated one…by a lot. This is because I’ve taken fairly small numbers, and hence the overhead of calling the function recursively is taking its toll. However, I tried it again with larger numbers, and the result was as expected: memoization beat tabulation, (though not by much).
The numbers get massive…imagine computing these with factorials!

Properties of Dynamic Programming:

Something you might have been wondering: why doesn't the recursive method (or even the memoized one) blow out the computer's stack? You'd expect it to, since for N=30 the plain recursive version makes on the order of 2^30 ≈ 1 billion recursive calls. The truth is that, while the number of function calls may be a LOT, they're shallow: the maximum depth they can reach is N, because that's the 'depth' of the table. Also, in the memoized version, most calls don't even go deeper than one level, because we've already memoized the result, so the call is just retrieving it.
This thought leads to an important feature of DP-problems:
  • Overlapping sub-problems:

    Dynamic Programming problems are all recursive in nature, as they all develop from some recurrence relation. However, not all recursive problems can use tabulation/memoization: divide-and-conquer algorithms such as quicksort and mergesort do not fall under the category of DP because each recursive call works on a different subproblem. When you come back to the same array indices the next time, the elements aren't the same, so the problem is not the same, and so the memoized solution to the old problem is of no use. In other words, the subproblems do not overlap.
    A good sign that you should use Dynamic Programming is that your subproblem space is small: you keep having to calculate the same answers again and again, and you can save time by storing them.
    Looking at the recursion tree of a problem, you can instantly tell if it has overlapping subproblems, because the same subproblems keep reappearing as nodes. In the pruned recursion tree (obtained by using Dynamic Programming), you can see just how stark the difference is.

  • Optimal substructure:

    If the optimal solution to a problem can be constructed from the optimal solutions of its subproblems (i.e. subsets of the problem), then it’s a strong hint that Dynamic Programming should be used.
    Optimal substructure is what we’re aiming for when generating the recurrence relation: we want one that solves the problem optimally by using the optimum of the smaller problems.

Source for the above: CLRS


As an example of the above two paradigms, consider the Rod-Cutting problem:

Rod-Cutting:

You are supposed to cut a metal rod into smaller rods. Provided is a reference table, which tells you the price of a rod piece of length ‘X’. eg: a rod of length 1 inch is worth $1, a rod of length 2 inches is worth $1.5, 3 inches might be $4.5, and so on.
Given a rod of finite length L, and assuming that every cut is free, how do you divide the rod so as to maximize your profit?

The total number of possible ways to cut the rod is exponential: at each of the N-1 possible cut positions you either cut or you don't, which gives 2^(N-1) combinations.

As we want to transform exponential running time into polynomial, the obvious thing to do is use Dynamic Programming.

Let’s review the problem iteratively:

  • The base case: if you have a rod of length zero, your maximum profit is zero.
  • Check out the smallest subproblem possible: you have a rod of one inch. There’s only one way to ‘cut’ it; into a rod of one inch, with zero cuts (cuts are free anyways).
  • If you have a rod of two inches, you have two choices: cut it into two pieces (1”, 1”), or leave it as one piece (2”). Depending on the reference table, you choose the alternative that maximizes the profit you get from that.
  • Now, suppose you have a 3-inch rod (yes, okay, I heard it). There are multiple ways to cut it: three pieces of lengths (1”, 1”, 1”), two pieces (1”, 2”), or one piece (3”). If you look carefully, this overlaps with the previous problem. Your real choices are: keep it as 3”, or set aside a 1” piece and reuse your previously calculated value for the best possible cutting of a 2” rod – i.e. either (1”, 1”) or (2”) – or set aside a 2” piece and reuse the best cutting of a 1” rod. Again, select the maximum of these choices as “the most optimum way of cutting a rod of length 3 inches”.
  • Go onto four inches, and the scenario changes, and the algorithm starts to emerge.
    • You can either keep it as 4”
    • You could set aside a 1” piece, and re-use the best way to cut a 3” rod.
    • You could set aside a 2” piece, and re-use the best way to cut a 2” rod.
    • You could set aside a 3” piece, and “re-use” the best way to cut a 1” rod (i.e. your base case)
      Thus, you find yourself with four options, and choose the best one.
  • 5” : You can see the algorithm in play:
    max( cost(5”),  cost(1”)+bestCost(4”),  cost(2”)+bestCost(3”),  cost(3”) + bestCost(2”), cost(4”) + bestCost(1”) )

This is all a bit weird-looking, so we’ll generalize it as a recurrence relation:

Rod-cutting:

bestCost(i) = max( cost(j) + bestCost( i – j ) ),    for 1 <= j <= i

where bestCost( 0 ) = 0 is the base case.

The code for Rod-Cutting would be as follows:

bestCost_driver( ):
    n = 8
    int saved_bestCosts[n]    // assume zeroed-out
    price_array = { 1, 5, 8, 9, 10, 17, 17, 20 }
    return bestCost(price_array, saved_bestCosts, n)    // output: 22

bestCost(price_array, saved_bestCosts, n):
    if (n==0)    return 0
    if (saved_bestCosts[n-1] != 0)
        return saved_bestCosts[n-1]    // the most optimal way to cut a rod of length n
    else    // haven't saved it, must calculate
        max_val = price_array[n-1]    // the price of the whole n-inch piece (this might be the optimal)
        for (i=0; i < n; i++)
            max_val = max(max_val, price_array[i] + bestCost(price_array, saved_bestCosts, n-i-1))
        saved_bestCosts[n-1] = max_val    // save the result
        return max_val
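A rough Python equivalent of that memoized rod-cutting code, using the same assumed price table, might look like this:

def best_cost(price, saved, n):
    # max revenue for a rod of length n, where price[k-1] is the price of a k-inch piece
    if n == 0:
        return 0
    if saved[n - 1] is not None:        # already computed
        return saved[n - 1]
    max_val = price[n - 1]              # option: sell the whole rod uncut
    for i in range(n):                  # option: sell a piece of length i+1, cut the rest optimally
        max_val = max(max_val, price[i] + best_cost(price, saved, n - i - 1))
    saved[n - 1] = max_val              # save the result
    return max_val

price_array = [1, 5, 8, 9, 10, 17, 17, 20]
n = 8
print(best_cost(price_array, [None] * n, n))   # 22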



So there you have it: a small intro to Dynamic Programming.

Since DP covers such a wide range of problems, I’m not going to be doing them systematically; I’ll take a few related problems in separate posts.

About the author:

Abhishek Divekar,

is a big fan of English rock band the Arctic Monkeys, even though he does not understand any of their music videos. He’s also offering 2,000 Rupiah to anyone who can explain why Humbug is their favorite album of all time (to cover the cost of a get well soon card).


The Sorting Problem, Part 2: Stable comparison sorts.

Note: ‘upto’ = inclusive.

1) Bubble sort:

Start from one end of your input array of 'n' elements. Keep a counter 'i' for the number of passes made so far. On every pass, start another pointer 'j' from 1 and walk up the array; if a[j-1] > a[j], swap the two.
Doing this, the elements start falling into sorted order…from the opposite end of the array from the one we started at. With every pass of i, one more element settles into its sorted position at the end.

Hence, as the last elements are already sorted, we can stop j before them.

Algorithm:

for i=0 upto A.length-1, do:
    for j=1 upto A.length - i - 1, do: 
        if A[j-1]>A[j] then: 
            swap (A[j-1], A[j])

Bubble sort can be modified with a 'flag' value in the if-statement, so that if no swaps happen in one traversal through the array, we exit the loops, as the array is already sorted. This reduces our best-case time from Θ(n^2) to Θ(n). Our worst-case time is still O(n^2) though, when the array is reverse-sorted. It does not use any extra space.
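Here's a quick Python sketch of bubble sort with that early-exit flag (the variable names are my own):

def bubble_sort(a):
    # in-place bubble sort; stops early if a full pass makes no swaps
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(1, n - i):
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                swapped = True
        if not swapped:        # no swaps: already sorted, best case becomes O(n)
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]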

Over the years, people have come out with various Bubble sort variations:

  • Cocktail-Shaker sort, a slightly more efficient version of Bubble, which sorts alternately from one end and then the other. In practice it takes roughly half the passes of bubble sort, but that is still O(n^2), and hence it is not really worth saying more about. Since it still only swaps adjacent elements, it is stable.
  • There also exists Comb sort, which is yet another bubble sort modification. It applies bubble sort with different comb lengths (gaps), decreasing the comb length down to one in the last pass, where it is just ordinary bubble sort. This sort exploits a property both insertion and bubble sort have: almost O(n) runtime for almost-sorted data. However, comb sort is unstable.
  • Odd-even sort is yet another bubble-sort derivative. No one really cares about it, either.

 

2) Insertion sort:

Start i from the left end of your input array, at index 1. Hold the value of A[i] in a temporary variable, 'key'. Keep shifting elements one place to the right (it's fine to overwrite A[i], as we have its value in 'key') until you find the proper place to put 'key', and put it there. Select the next A[i] as your new 'key' and repeat until sorted.

[Animation: insertion sort example]

Many people get insertion sort wrong; we don't repeatedly pick the smallest remaining element and move it to the end of the sorted portion…that's selection sort.

Algorithm:

for i = 1 to length(A) - 1
    key = A[i]
    j = i
    while j > 0 and A[j-1] > key
        A[j] = A[j-1]
        j = j - 1
    A[j] = key

The time complexity of insertion sort, like bubble sort, is O(n^2) for when the array is reverse-sorted, and Ω(n) for when it is sorted. Average case is Θ(n^2).

 

Insertion sort variations (Note: some are unstable):

  • Shell sort:
    Despite being Θ(n^2), insertion sort is usually very fast on almost-sorted data. It is also very fast on very small amounts of data: it will beat O(nlogn) sorts for arrays of fewer than about 10 elements.
    This unique property of insertion sort gave rise to Shell sort, which uses insertion sort multiple times, on different sets of elements. The first time, it considers the array made up of every 'a'th element, and sorts those accordingly. Then it selects every 'b'th element (where b<a) and sorts those. Then every 'c'th element (where c<b<a), and so on.
    a, b, c, d… are called the 'spans' (or gaps), and are usually stored in an array of their own; how they are selected is a subject on its own. In the last pass of Shell sort, span=1 and it is basically insertion sort. The data is almost sorted by then, and insertion sort runs very fast on it. (There's a small sketch of this just after this list.)
    Unfortunately, Shell sort is unstable (i.e. it sorts, but it may change the relative positions of keys with equal values), and its usefulness depends heavily on how we select our spans. Most logically, we should select them so that our largest span gives us subarrays of fewer than ~10 elements each. But like I said, it's complicated.
    Likewise, calculating Shell sort's complexity is complicated, but one thing is for sure: with the usual span sequences, even its best case is O(nlogn).

Shell sort, which has nothing to do with the UNIX shell.

 

  • Gnome sort is a mixture of insertion and bubble sort, proposed in 2000. It starts off like insertion sort, but instead of shifting elements forward, it swaps adjacent elements like bubble sort. It is, again, O(n^2). Gnome sort is stable because it changes so little in the way insertion sort works.
  • Library sort is essentially insertion sort with spaces left in the array, like a librarian leaves gaps in rows of books. It is supposed to be Θ(n.logn), but it is not specified how to leave the spaces. It is hence unrealistic.
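As promised in the Shell sort bullet above, here's a small Python sketch using the simple halving span sequence; real implementations usually pick fancier span sequences:

def shell_sort(a):
    # insertion sort applied to every span-th element, for shrinking spans
    span = len(a) // 2
    while span > 0:
        for i in range(span, len(a)):
            key = a[i]
            j = i
            while j >= span and a[j - span] > key:
                a[j] = a[j - span]
                j -= span
            a[j] = key
        span //= 2        # the last pass (span == 1) is plain insertion sort
    return a

print(shell_sort([23, 12, 1, 8, 34, 54, 2, 3]))   # [1, 2, 3, 8, 12, 23, 34, 54]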

 

3) Merge sort:

This is one of the simplest O(n.logn) sorting algorithms. It runs like this:

[Animation: merge sort example]

 If merging is wrong I don’t wanna be right.

The code for merge sort can be found on these links:

I did not include it here as it is quite large. Merge sort can be implemented iteratively, but, by design, it is recursive, as it follows the divide-and-conquer procedure. The iterative (bottom-up) version is easier to implement, in my opinion.

It has two basic procedures:

  1. Divide: select the size of the subarrays to merge, from 1 to 2 to 4 to 8 etc.…doubling the size of the subarrays to merge every pass.
  2. Merge/combine: put each pair of sorted subarrays in order to get a larger sorted subarray. Note: an array of size 1 is already 'sorted'. (A sketch follows just below.)

Pretty simple, right?
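Since I left the real code out, here's a rough bottom-up (iterative) Python sketch of the two procedures described above; it uses an auxiliary list, as discussed further down:

def merge_sort(a):
    # bottom-up merge sort: merge runs of size 1, 2, 4, ... into an auxiliary list
    n = len(a)
    size = 1
    while size < n:
        aux = []
        for lo in range(0, n, 2 * size):                 # merge a[lo:mid] with a[mid:hi]
            mid, hi = min(lo + size, n), min(lo + 2 * size, n)
            left, right = a[lo:mid], a[mid:hi]
            i = j = 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:                  # '<=' keeps the sort stable
                    aux.append(left[i]); i += 1
                else:
                    aux.append(right[j]); j += 1
            aux.extend(left[i:])
            aux.extend(right[j:])
        a = aux
        size *= 2
    return a

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]

Building a fresh auxiliary list each pass is the lazy, Pythonic version of the "copy back to the main array" step discussed below.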

One of the problems is figuring out why it is an O(nlogn) sorting algorithm. That requires a picture:

This is a really shitty picture, because it is really confusing for someone who has never seen Merge sort. Unfortunately, it is also the most accurate one I could find.

Think about it this way: you have 8 elements. What's log(8) base 2? It's 3. That's the depth of your merging. Dividing takes essentially no time: just pass i + size as the length of the subarray to your merging function. Hence dividing is O(1).

Merging depends on the size. In the last pass, you have to merge 4 elements with 4 elements, which is the most the merge will ever have to handle, so that pass does O(n) work. For smaller sizes, say size 2, you have to merge each pair of subarrays of length 'size' from the start to the end of the array, which is again O(n) work in total. So, O(n) work per level × O(log n) levels = O(n.log n). You need an auxiliary array of size 'n' to put the elements into, or else you will overwrite elements and lose their values. The data is copied back to the main array every time we double the size variable. (Note: alternatively, we could swap the roles of the main and auxiliary arrays each pass.)

Merge sort works most cleanly when n is an exact power of 2; however, it will work for every 'n', the last subarray will just come up short.

Time complexity:

Merge sort runs like hell. It is competitive with Quicksort on average, and much faster than Heapsort. It is very fast. And it is also stable: with a simple modification via an if-statement, our merging function can maintain the relative order of elements with equal key values.

The real problem with merge sort is the auxiliary memory, which is O(n); that is unacceptable in many cases, especially when dealing with databases holding hundreds of gigabytes of data, or with other memory-limited systems.

There is one exception, though: merge sort on a linked list. This will take O(1) auxiliary space if you enter your data into a linked list. And it will still run in O(n.log n).

Merge sort variations:

  • Timsort, which was developed in 2002 by Tim Peters for the Python programming language. It is now the standard sorting algorithm in Python and is used in both Java and Android. It is a combination of insertion sort and merge sort, and runs in O(n.logn) in the worst case and O(n) in the best case (plain merge sort is still O(n.logn) even in the best case). Timsort is stable, unlike Shell sort.
  • There is also another mix of insertion sort and Merge sort, called Block sort. It runs in-place in O(n.logn) time with Ω(n) as its best case. It is a pretty good optimization of Merge sort down to O(1) space; however, it does require rotations of blocks of data. It is also stable.

 


Sorting linked lists:

Linked lists, as you probably know, are data structures where every Node is an object/structure that stores a bunch of data. One of the member variables of each node is a pointer to the next Node in the sequence. Sometimes there is also a pointer to the previous node, in which case it is called a doubly linked list.

If, in a singly linked list, the last node points to the first, it is called a circular singly linked list. If, in a doubly linked list, the last node and first node point to each other, it is called a circular doubly linked list.

With linked lists, we allocate each node dynamically and not all at once like in an array. However, like arrays, linked lists are linear data structures. The main difference between linked lists and arrays is that in linked lists, access time for any element at random is O(n), whereas in an array it is O(1); in fact, it is exactly one step: A[x].

This is because, in the computer's memory, creating an array allocates memory sequentially, whereas in a linked list every node is created at a separate location in memory, which can only be accessed via a pointer to that node…namely the pointer stored in the previous node of our linked list (and also in the next node, for a doubly linked list). We start accessing the nodes via a pointer to the start of the list, called the 'list' or 'head' pointer. From there, we must traverse the list to reach any other node in it. Thus,

An array has random access and a linked list has linear access.

If we lose the pointers to a node, we can't ever find that node again in memory. This is inconvenient compared to arrays, where we only need the first element's position in memory, and then all other positions are relative to it (this is what A[x] means: we access the 'x'th element relative to A[0]).

However, linked lists do trump arrays in one respect: allocation time. In a linked list, if we make a new node, we can insert it basically anywhere we want. Arrays, meanwhile, are immovable without copying: inserting a new element into the middle of an array requires creating a brand new array with n+1 elements and copying everything into it. This is because:

An array has bulk allocation whereas a linked list has discrete allocation.

Arrays and linked lists are, therefore, sort of like the Yin and Yang of data structures.

Yin and Yang stands for two forces which seem opposite,

but which are actually interconnected and complementary.

 

One of the really cool things about insertion sort and bubble sort: you can run them both on linked lists. In fact, insertion in sorted order into a linked list is basically insertion sort. You just have to modify your algorithm a bit. You can also do this with merge sort.

Here is a simple sort of a linked list using insertion sort. It uses the routine INSERT-SORTED, which inserts a Node 'z' into its sorted position in a presorted linked list. This will sort in O(n^2) time because, if you look closely, it exactly mirrors insertion sort on an array.

SORT-LINKED-LIST(List)
    if (List.head==NULL or List.head->next==NULL)   
        return 
    i=List.head
    while(i->next != NULL)
        j=i->next
        temp=i->next->next 
        i->next=NULL 
        /*we have now detached our linked list into two smaller 
lists, one starting from temp, and one ending at i. We do this 
because in insertion sort on an array, we go backwards from the 
ith element to the first element and 'insert' into the correct 
position. 
Here, we go forward from the first element to the ith element, 
and insert into the correct place. The two algorithms are 
virtually the same; the element might be inserted anywhere between
the first and ith element for both. */
        INSERT-SORTED(j, List) //calls the function
        if(i->next==j)
            i=i->next
        i->next=temp //reattach the linked lists
INSERT-SORTED (z, List) 
    if List.head==NULL
        List.head=z
        return
    p=List.head
    prev=NULL
    while(p!=NULL)
        if(z->data <= p->data)
            if(p==List.head)
                z->next=List.head
                List.head=z
 
            else
                z->next=prev->next
                prev->next=z
 
             return
        prev=p
        p=p->next
 
    prev->next = z

For bubble and insertion sort, the efficiency doesn’t change by using a linked list.

It is just the same as running on an array. They don’t need extra space and will run in O(n^2) time. They take roughly the same amount of actual time, too.

The same cannot be said about Merge sort…Merge sort actually gets more efficient when you run it on a linked list. Merge sort's biggest limitation was its auxiliary space; with a linked list, rearranging elements is easy. Granted, you will need a few extra pointers, but you don't need the massive O(n) auxiliary space that was needed for sorting an array! Yay! 😀 Proving this, however, is left as an "exercise to the user" 😛 (Note: hint).

Quicksort and Heapsort, too, can be run on linked lists, but the truth is, it is wonky and shoddy and weird to do so. These are unstable algorithms, which should be a big pointer to the fact that they don't access data linearly: their efficiency depends on random access into an array, and in a linked list, accessing an arbitrary element is O(n). For Quicksort this isn't too much of a problem, and Quicksort on a linked list can be and has been done…but it adds a lot of overhead to the constant, and Quicksort's low constant in its average-case time is the only real reason we ever use it in the first place. As for Heapsort, using a linked list would be really unacceptable in the 'heapify' stage, and would put a big dent into its already bloated constant, perhaps even pushing it to O(n^2) or beyond.

So, why is it that stable sorting algorithms work really well with linked lists?

This is conjecture, but I really think it is because in a stable sort, you compare one element with every other unsorted one. Insertion and bubble do this very obviously; Merge does it less obviously, while merging. Stable sorts compare two objects with the same key value directly, and then put them into their relative positions. This is best done by going from the start of the array to the end, i.e. by accessing your array linearly. As a linked list has linear access, it is hence a good match for stable sorting algorithms.

 

Welp, that’s all for now, folks!

I’m going to continue this post in “The Sorting Problem, Part 3: unstable Comparison sorting”, where I will deal with Quicksort, Heapsort, and why they were invented in the first place.

Also stay tuned for “The Sorting Problem, Part 4: Why Not Radix?” where I deal with the strange, giggling non-comparative sorts. So, make sure you keep your thumbs pointed to the stars, and your browsers open!

About the author:

Abhishek Divekar,

as a kid, really wanted a dog. But his parents bought him a computer instead. And so he decided to trade one for the other, and has loved computers unconditionally ever since. They, meanwhile, have barely just tolerated him. At least, he reasons, they would never eat his clothes or spill fluids all over his bed. Except for that one time, with the Dell.

The Sorting Problem.

BREAKING NEWS:

Genius mathematicians and Algorithms 101 students concur; sorting is “Not that easy”.

Well, I guess that clears things up.

Thanks for reading.

 One Abe in a bar is worth two Homers in a bush.

But seriously, there are a couple of things that make sorting really hard:

  • You can’t just get it done with in one go; you have to go through your array/tuple multiple times.
  • Every time you move an element, you must check that you don’t accidentally push it into the wrong place and overwrite some other data.
  • Worse, you have to get a machine to do it. And computers are really strange sometimes.
  • And you want your sort to be really fast, too.

These are all reasons why sorting is taught in Algo 101 classes and why brilliant mathematicians are still working on the problem.

It doesn’t mean we have managed to find one perfect way of doing it yet; sorting is still really difficult.

 

Okay, maybe not that difficult.


So, maybe it’s a good idea to start this post with why and how we sort data.

Disclaimer: this post begins with the following long, boring, definition on sorting.

Via Wikipedia:

The main purpose of sorting information is to optimise its usefulness for specific tasks. In general, there are two ways of grouping information: by category e.g. a shopping catalogue where items are compiled together under headings such as ‘home’, ‘sport & leisure’, ‘women’s clothes’ etc. (nominal scale) and by the intensity of some property, such as price, e.g. from the cheapest to most expensive (ordinal scale).

Richard Saul Wurman, in his book Information Anxiety, proposes that the most common sorting purposes are name, by location and by time (these are actually special cases of category and hierarchy). Together these give the acronym LATCH (Location, Alphabetical, Time, Category, Hierarchy) and can be used to describe just about every type of ordered information.

The opposite of sorting, rearranging a sequence of items in a random or meaningless order, is called shuffling.

For sorting, either a weak order, "should not come after", or a strict weak order, "should come before", can be specified. For the sorting to be unique, these two are restricted to a total order and a strict total order, respectively.

Generally objects can be sorted based on a property. Such a component or property is called a sort key. For example, the items are books, the sort key is the title, subject or author, and the order is alphabetical. A new sort key can be created from two or more sort keys by lexicographical order. The first is then called the primary sort key, the second the secondary sort key, etc. For example, addresses could be sorted using the city as primary sort key, and the street as secondary sort key…….

…Basically, the small thing should come before the big thing. What the small thing is, depends on the situation.

Sometimes it is an integer. Sometimes it is a character. Sometimes it is a whole string of numbers, or even the addresses of houses.

Sometimes, we even put the big thing before the small thing. This is called sorting in descending order. By default though, when we sort, we sort in ascending order, because it makes a nice sound.

 

 

Types of sorting algorithms:

  1. Comparative sorting algorithms:

    these include (but are not restricted to) insertion, bubble, Shell, merge, heap and quicksort. In these types, every comparison involves comparing entire keys with one another, and the algorithm rearranges the elements (either in-place or using auxiliary, i.e. extra, memory).
    It has been proved that the lower bound for all comparison-based sorting algorithms is Ω(n.logn) comparisons in the worst (and average) case. There are modifications that let comparison-based sorting algorithms finish in O(n) time when the array is already sorted or reverse-sorted, but for a random input array, a comparison-based sorting algorithm will not run much better than Θ(n.logn).

  2. Non-comparative sorting algorithms:

    the major ones are bucket sort, counting sort, and radix sort. Here, we don't compare the numbers directly…instead we do something else. All the non-comparative sorting algorithms use auxiliary memory, i.e. additional memory other than our input array. All of them are similar to hashing, and most of them run in linear time. This is yet another example of the space-time tradeoff that exists in the development of algorithms.

Space and time complexity of all sorting algorithms

 

I really encourage you to see the above link, which is about the space and time complexities of all the various sorting algorithms. It helps a lot to see how they stack up.

Also, I really, really recommend you see this short, informative, fun Youtube video about how fast different sorting algorithms will run in real time. Every algorithms student and programmer should see it. Here is also a playlist of all of them individually and slowed down.

Other than time and space complexity, there is one more criterion that you need to consider while choosing which sorting algorithm to use…

 

 

 


Stability:

Now, an important class of sorting algorithms is the stable sorting algorithms, i.e. those that maintain the relative order of objects with equal keys. A key is the property of the objects with respect to which you wish to sort them.

For example, suppose you are given a list of students, presorted by roll-call number, and we now sort them by name (first name only). Then Alice who is #14 will come before Alice who is #27, and Robert Ludlum who is #60 will come after Robert Stark who is #35 (remember, we are sorting by first name only), because that is the relative order they were in before we decided to sort the children by name.

 

Stability is more important than you think.

Tower of Hanoi: advanced mode.

Take another example:

You are buying a new cellphone. You check out various candidates, weigh different features and operating systems, but in the end it all comes down to one thing: money.

You make a table of all your possible candidates with their respective prices. Initially, you’d probably make a list that looks like this:

Samesung DS4 $530
Samesung NotDifferentAtAll $750
Samesung Not $650
Samesung Not3 $750
Samesung DS5 $630
macromin Easel $270
Mapple myPhone 6.0.0.0.2 $1450
Mapple myPhone 5 $900
Mapple myPhone 6 $1150
Karcoal Uranium $330
Voogle Xenus 5 $650
HTTPS Two $890
HTTPS Desire 982 $800
Okai Candella L432 – 320 uberpixel Zion Lens $1000
Blueberry 3.1416 $490

As you can see, these are all grouped by device manufacturer, but they are not sorted otherwise.

Suppose you want to sort this data by price. You could use a stable or unstable sorting algorithm to do this, and you would get a table like this:

macromin Easel $270
Karcoal Uranium $330
Blueberry 3.1416 $490
Samesung DS4 $530
Samesung DS5 $630
Voogle Xenus 5 $650
Samesung Not $650
Samesung Not3 $750
Samesung NotDifferentAtAll $750
HTTPS Desire 982 $800
HTTPS Two $890
Mapple myPhone 5 $900
Okai Candella L432 – 320 uberpixel Zion Lens $1000
Mapple myPhone 6 $1150
Mapple myPhone 6.0.0.0.2 $1450

Now, suppose you want to group them back by company name, but you don't want them jumbled back into how they were in the first table. In other words, within each company you still want them sorted by price. This means that when you sort a second time, you need to make sure that phones with the same company name retain their relative positions. Meaning:

When sorting with respect to multiple keys in succession, stable sorting algorithms are required.
We first sort with respect to the least important parameter, and then successively the more important parameters.

This requires you to run two sorts in succession, first by price (the less important key) and then by company name (the more important one), to get an arrangement like this:

Blueberry 3.1416 $490
HTTPS Desire 982 $800
HTTPS Two $890
Karcoal Uranium $330
macromin Easel $270
Mapple myPhone 5 $900
Mapple myPhone 6 $1150
Mapple myPhone 6.0.0.0.2 $1450
Okai Candella L432 – 320 uberpixel Zion Lens $1000
Samesung DS4 $530
Samesung DS5 $630
Samesung Not $650
Samesung Not3 $750
Samesung NotDifferentAtAll $750
Voogle Xenus 5 $650

You can tell which parameters are considered most important by looking at which are the broadest categories. Company name, for example, is the broadest here. Only in case of conflict of company name do we need to add more search parameters. Keeping data sorted with respect to multiple parameters helps searching tremendously, as we can use binary search on each parameter. Companies like Amazon etc. allow you to shop for items in an entirely similar way.
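To make the "least important key first" rule concrete, here's a small Python sketch that leans on the fact that Python's built-in sort is stable; the data is a shortened version of the made-up phone list above:

phones = [("Samesung DS4", 530), ("Mapple myPhone 5", 900),
          ("Samesung Not", 650), ("Voogle Xenus 5", 650),
          ("Mapple myPhone 6", 1150), ("Samesung Not3", 750)]

# sort by the least important key first, then by the more important one
phones.sort(key=lambda p: p[1])                 # secondary key: price
phones.sort(key=lambda p: p[0].split()[0])      # primary key: company name

for name, price in phones:
    print(f"{name:20} ${price}")
# within each company, the phones stay ordered by price

The same two-pass trick works with sorted(..., key=...), since Timsort, Python's built-in sorting algorithm, is stable.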

Using stable sorting algorithms hence makes searching faster.

Stable sorting algorithms also have more uses.

  • Radix sort, for example, completely relies on stability to sort (go ahead, try to run Radix while purposely not maintaining stability: the data will never end up sorted). So do Bucket sort and Proxmap sort, as well as many more non-comparative sorting algorithms.
  • More than that, we usually like stable sorting algorithms because they make sense: when they are executed one after the other, using different keys, you get a unique and useful result. Suppose you have a list of folders on your computer and you sort them first by 'name', and then by 'date modified'. You will get your most recently modified files/folders at the top, not in some random order, but neatly arranged by name.
  • Another important thing about stable sorting algorithms is that you can run them on linked lists.

 Here is a list of all the known stable sorting algorithms. They are mostly varieties of insertion sort, bubble sort, merge sort, radix sort and bucket sort.


 Conclusion: sorting may be hard, but it is important. And sometimes fun, too.

I am now going to break up my notes on sorting algorithms into three parts, based on what I think are the important categories they fall into, covered in three separate posts:

  • Stable Comparative sorts
  • Unstable Comparative sorts
  • Non-comparative sorts

Let’s get on with part 1…

Binary Trees; When Bark Beats Byte!

In my previous post I analyzed how complexity really makes a difference with some raw mathematical data and visualizations, to help programmers make the right efficiency choices.

I also promised that I wanted to make a post “leafing around the concepts of trees just a little”, and this post does exactly that.

 

Give a man a tree….

For those unfamiliar with trees and their nomenclature, a tree is a data structure made up of nodes. Each node holds pointers to its child nodes (and sometimes to its parent node). The topmost node, with no parent, is called the 'root', and it is the first node inserted into the tree.

A tree by definition can have any number of nodes as its children, but I feel personally that once the maximum number of children per node exceeds two, the problem at hand becomes more of a graphing problem. Hence I shall limit my discussion to trees with at max two children per node, or binary trees. There are a few reasons for this, and not all logical.

  1. A binary tree follows many logical and philosophical dualities: yin and yang, yes and no, him and her, true and false, 1’s and 0’s, etc. that make it relevant amongst the various types of trees.
  2. The gain in insertion, deletion and searching time achieved by implementing a ternary tree (3 children per node) over a similar binary tree (2 children) is minimal. They both insert, search and delete in logarithmic time: a binary tree performs these three operations in O(log₂ n), and a ternary tree in O(log₃ n). As the graph from my last post shows, the gain in time achieved by this is pretty minimal…and a binary tree is much easier to implement.

Binary trees also make very pretty fractal designs.

However, search trees with more than two children (called B-Trees) do exist and are very widely used in databases due to their flexibility; but the upkeep is rather difficult for beginners. Hence, I will explore trees like B-trees, Radix trees and Tries, in other posts.

More than just binary trees, I want to focus on binary search trees, a very efficient data structure, and one whose simplicity of use and design I am particularly fond of:

A binary search tree (i.e. BST).

This is a binary search tree, courtesy of the prestigious Carnegie Mellon University. The operations for searching, insertion and deletion in a binary search tree all take O(h) worst-case time, where 'h' is the height of the root, also called the height of the tree. It is also the depth of the tree, which is equal (in magnitude) to the maximum level of the tree.

All these terms are very confusing.

No, really. I have no idea.

Luckily, this link and this link provide a correct, full set of terminology for the tree data structure, and it is free for everyone to see on Wikipedia.

There are several types of binary tree. Pay special attention to the perfect (not to be confused with complete) and balanced kinds, as they are relevant here.

 

So, why is searching, inserting and deletion from trees so efficient?

This question can be answered by asking why inserting and deleting random data from other data structures is so inefficient;

Inserting data in an ordered way takes O(n) time when we use both arrays and linked lists [in arrays you have to shift the elements around, in linked lists you have to find the position where we want to insert our data if we want to insert it in sorted order].

Inserting into, deleting from and searching a binary search tree, meanwhile, is O(logn) (base 2, for balanced binary search trees). It is also said to be O(h), where h = height of the tree.

But this is just a function, it doesn’t mean much to us. I’m a fan of expressing numbers relative to one another, as it gives a much better perspective when a choice has to be made.

So let’s take an example:

It has been estimated that around 100 billion humans have lived on earth in total. Some people have had no children, and some people, like the utterly badass Mongol warchief Genghis Khan, are thought to be the ancestor to one out of every 200 men alive. But that’s just some people.

Suppose we assume every person of age has exactly two children, and all those below age have none. Let's also take a break from reality for a sec, and assume that human reproduction is mitotic, like that of single-celled organisms.

“Don’t look at me, I’m still wondering how the little guy got here.”

If we created records for all these 100 billion people, and inserted them – in the order in which they were born – into a ‘universal tree’, which was sorted according to people’s names, we would get a huge binary search tree.

Now for the fun bit: if we wanted to find any one person's profile among these 100 billion people, the maximum number of steps we'd have to take from the root node to that person's node is O(log n). For base 2 (assuming our tree is balanced), that is log₂(100 billion) ≈ 37 steps.

We could find the records of any single person who ever existed, in less than 38 steps, using a Binary Search Tree. If a new baby is born, we can also insert their records into the correct place in the tree, in 37 or fewer steps.
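Here's a tiny Python sketch of the idea; it's a plain (unbalanced) binary search tree keyed on names, so the ~37-step figure only holds if the tree stays reasonably balanced:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # insert key into the BST and return the (possibly new) root
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # return the number of steps taken to reach key, or None if it is absent
    steps = 0
    while root is not None:
        steps += 1
        if key == root.key:
            return steps
        root = root.left if key < root.key else root.right
    return None

root = None
for name in ["Mia", "Arjun", "Zoe", "Liam", "Chen", "Noor", "Ravi"]:
    root = insert(root, name)
print(search(root, "Chen"))   # a handful of steps, roughly log2(n) when balanced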

That’s amazing.

It gets better…

Tree Sort

One of the best things about a binary search tree is that data is entered in a sorted way, and this lets us obtain the data in sorted order, in O(n) time, by performing an Inorder traversal of the tree.

Inorder traversal: start from F and follow the arrows.

Output: A, B, C, D, E, F, G, H, I

Now, there are several ways to perform such an Inorder traversal; Wikipedia lists one with recursion and one without, both using pseudocode (the one without recursion, uses a stack of node pointers; whereas the recursive one uses the call stack implicitly). Both of these can be easily implemented in a high-level language: the “visit(node)” section of the code could be replaced with a printing statement, or a statement storing that number in an array.
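For example, a recursive in-order traversal in Python might look like this, where 'visit' just appends the key to a list, as suggested above (the little hand-built tree is my own example):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node, out):
    # visit the left subtree, then the node itself, then the right subtree
    if node is not None:
        inorder(node.left, out)
        out.append(node.key)          # the 'visit(node)' step
        inorder(node.right, out)
    return out

# a small BST:        D
#                    / \
#                   B   F
#                  / \   \
#                 A   C   G
root = Node("D", Node("B", Node("A"), Node("C")), Node("F", None, Node("G")))
print(inorder(root, []))   # ['A', 'B', 'C', 'D', 'F', 'G'] -- sorted order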

This kind of traversal is called a depth-first search (DFS). It is a common searching technique that you will often find when dealing with the graph data structure…because the Binary Search Tree is a graph, albeit a simple one.

There are two more noteworthy depth-first traversals: preorder and postorder. I like preorder more, because a preorder traversal gives you a sequence that, if re-inserted into an empty BST in that order, rebuilds exactly the same tree.

Note: this only works if you don't balance your tree.

A quick reference; Note: L always comes before R, i.e. you never go right before going left.

Inorder= L-V-R  [Left, Visit, Right]

Preorder=V-L-R

Postorder= L-R-V

These kinds of traversals (depth-first) don't work on infinite trees, i.e. trees with infinite depth, or on trees which loop back somewhere (that is actually called a cyclic graph, and you will study it when you study graphs). For infinite trees and cyclic graphs, a Breadth-First Traversal (also called level-order traversal) is the only sensible form of traversal. It is also one of the most convenient ways to print your tree. Algorithm here (uses a queue of pointers).

But there’s more.

Inorder traversal, like all traversals, will touch every node in the tree at least once (if only to print it). Using this property, and a suitably modified ‘Node’ class, you can set the height, depth, maximum height, right or left child-ness, presence of a sibling node, the total number of leaf nodes, number of non-leaf nodes, etc…. for all the nodes of your tree, in one simple O(n) traversal.

I made a post to stackoverflow.com with the code (using recursion). You can view it, and add to the discussion (sign in with Google).

 

So, Binary Search Trees. They insert, delete, search and sort very efficiently, in O(log n) time.

Yes, they do.

Did I mention they are my favorite data structure 😀 ?

…What’s the catch?

There, er, is no catch.

Hahaha. HA-HA-HA-HA-HA. HAHAHAHAHAHAHAAHAAHAAHAAHAHAHA…….

Er…….

WHAT’S THE CATCH!?!

Okay!

This sort of…..sort, isn’t perfect.

I mean, the sort is perfect: it traverses the tree in O(n) time…but the very act of making the tree, inserting each node, is where we hit our pitfall. Inserting each node is supposed to take O(log n), i.e. logarithmic time. Inserting 'n' nodes should then take O(n.logn) time. This would pit Tree Sort against other optimal comparison sorting algorithms, like Quicksort and Mergesort. But that isn't always the case.

Suppose you entered your data in a sorted, increasing order. The first element, x[0], will become the root; then x[1] (greater than x[0]) will be its right child; then x[2] (greater than x[1]) will be x[1]’s right child, and so on and so forth.

So now, you get a tree with no left children. Such a tree is said to be skewed.

Inserting into a skewed tree will take O(n) time (big 'O' = worst case). Your worst case will be that each new element is greater than all the others, so it becomes the rightmost node, and that this happens for every one of the 'n' elements you enter.

Hence, the tree degrades into a linked list, with the extra space of our left pointers wasted as well. Worst of all, with a skewed tree, we lose our incredible O(logn) insertion, deletion and searching time…it now becomes O(n). So, to create a tree with ‘n’ elements, we will take O(n^2) time if the tree is skewed. This is terribly slow.

Side note: the statement

inserting, deleting and searching a binary tree takes O(h) time, where ‘h’ is the height of the tree.

is always true, even for skewed trees. However, that is because the height is Ω(logn) but not O(logn): it equals log₂ of the number of inputs in the best case (a perfectly balanced tree), but can be as large as n in the worst case (a completely skewed tree). In the average case, though, 'h' is close to or even equal to logn, but it depends on the input.

 

Trees as arrays:

Implementing a tree as an array, as is done in Heapsort, is an interesting concept, though it doesn't solve our problem of O(n) insertion time for skewed trees.

When you implement a tree as an array, you gain a few things:

Random access time of an array, which makes some operations easy to implement…Heapsort in particular.

Implementing a tree as an array also makes breadth-first traversal extremely easy: you just go through the array from start to finish, and print every non-empty element.

And, if done carefully, you can maintain the O(logn) insertion, deletion and search time. (Note: this is left as an exercise to the reader, which is a way of saying “I think this is possible but I’m too lazy to try it out myself”.)
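To illustrate the array representation (without attempting the "exercise"), here's the usual index arithmetic in Python: the children of the node at index i live at indices 2i+1 and 2i+2, and a level-order walk is just a pass over the array. The sample tree is made up:

# heap-style array layout of a binary tree (0-indexed):
#   parent(i)      = (i - 1) // 2
#   left_child(i)  = 2 * i + 1
#   right_child(i) = 2 * i + 2
tree = [8, 3, 10, 1, 6, None, 14]    # None marks an empty slot

def children(tree, i):
    # return the values of node i's children, if they exist
    kids = []
    for c in (2 * i + 1, 2 * i + 2):
        if c < len(tree) and tree[c] is not None:
            kids.append(tree[c])
    return kids

print(children(tree, 0))   # [3, 10]
print(children(tree, 2))   # [14]  (10's only child)

# breadth-first / level-order traversal is just the array, start to finish:
print([x for x in tree if x is not None])   # [8, 3, 10, 1, 6, 14]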

The disadvantage of this method:

Since the size of an array is fixed, you have to make the assumption that your tree of height ‘h’ is perfect.

It doesn’t solve our problem with O(n) inserting time for a skewed tree.

The space wasted by implementing a skewed tree as an array is enormous; O(2^h) in the worst case, ‘h’ being the height. And as the table from my last post shows, this reaches over a million elements when ‘h’ becomes as little as 20. So for a very skewed tree, you would need one million elements in your array just to store twenty values in their correct positions.

 

So, is there actually a way to get over the O(n) insertion-deletion-search time barrier?

Yes, thankfully, there is. Every time you insert into a tree, you check if the tree is balanced.

A balanced binary search tree is the opposite of a skewed tree: it tries to maintain its data in order, with each node having the least possible height. 

We could create a function to balance a tree separately when we notice that the tree is becoming increasingly skewed, but this is subject to a lot of fine-tuning and is likely time-consuming. Instead, we could implement one of the several self-balancing binary search trees that already exist. 

A self-balancing tree is one that does the balancing automatically: on insertion of a new node, each node of the tree is checked for balance using a balancing factor. If the tree is unbalanced, some operations must be performed to get the search tree back into its balanced state.

The most widely-studied amongst these are AVL trees and Red-Black trees, though many others exist. AVL trees were developed first, in 1962, just two years after the first formal binary search tree was defined. Red and Black trees were developed a whole 10 years after AVL trees, in 1972. The two are similar, but not identical.

Both follow the same basic structure of inserting/deleting and balancing (see the animation of tree rotations below):

  1. Insert/delete node (make appropriate changes for deletion).
  2. Check if and where the tree is unbalanced using balancing factor (relative heights of children for AVL trees, red/black-ness for Red-Black trees).
  3. If unbalanced, perform the necessary tree rotation operations until it is balanced.
  4. Doing so, we get the tree back into a proper shape relative to the number of nodes it has, and not slanting to one side.

 

L-R and R-L Tree Rotation animations

AVL trees balance in O(log n) time – same as insertion time, so we could insert two elements in that time – but allow for fast searching as they are more ‘perfect’ with a smaller maximum height, which is limited to ~1.44lg(n+2) (mathematically proven).

Red-Black trees need only O(1) rotations per insertion or deletion and are built for faster inserts/deletes, with less effort spent on balancing the tree tightly. Their height is limited to ~2lg(n+1) (mathematically proven). Hence, their lookups are a little slower.

Either way, both AVL and Red-Black trees keep rebalancing within O(logn) per operation! Hence, we can maintain our tree, all the while maintaining our O(log n) search, insertion and deletion time! 😀

Note: B-Trees can also be made self-balancing!

Sources on notes on self-balancing trees: 1 2 3

How to implement AVL trees, courtesy of GeeksforGeeks.

 

Welp, that’s all folks!

I would really like to go into depth about self-balancing trees, but unfortunately, I’ve run out of time and space! In future posts, I hope to fully explore AVL and Red-Black trees, as well as the all-important B-Trees and the strange-looking Tries. Until then, keep your browsers open and your thumbs pointed to the stars!

About the author:

Abhishek Divekar,

is a student and occasional class clown at VJTI, Mumbai. When he is not being hunted by telecom marketers, he spends his free time exploring the web, reading and writing technical notes, making funny videos and ‘fixing’ the computers of all those who know him.

Complexity: why and where it matters.

I started this blog for those interested in computer science, and coming from that background, I expect you’ll be interested in how an algorithm’s complexity really makes a difference. I also expect that you know what I’m talking about when I say complexity. If you don’t, it would really be a bother to have to explain it all over again, but luckily I don’t have to!
Here are a few links from Wikipedia and Cornell University to get you started.

Once you’re through with that, check out this table I made, which observes how some of the most commonly-used computational complexity functions stack up against each other, as the number of inputs ‘n’ increases:

Note: this table contains exact values, but only the order of magnitude really matters: the same algorithm might be coded in different ways by different programmers, so the exact function of ‘n’ describing its running time may differ from code to code. However, if the method of approaching the problem is the same in all of those codes, the asymptotic notation – O(n), O(n·log n), etc. – will be the same as well.

 

Observing function values at various values of ‘n’:

n | n! | 2^n | n^3 | (n^2)·log(n) [base 2] | n^2 | n·log(n) [base 2] | n | log(n) [base 2]
--- | --- | --- | --- | --- | --- | --- | --- | ---
1 | 1 | 2 | 1 | 0 | 1 | 0 | 1 | 0
2 | 2 | 4 | 8 | 4.1 | 4 | 2 | 2 | 1
3 | 6 | 8 | 27 | 14.265 | 9 | 4.755 | 3 | 1.585
6 | 720 | 64 | 216 | 93.059 | 36 | 15.51 | 6 | 2.585
10 | 3.6288 E6 | 1024 | 1000 | 322.193 | 100 | 33.219 | 10 | 3.322
15 | 1.308 E12 | 32,768 | 3,375 | 879.5 | 224 | 58.603 | 15 | 3.907
21 | 5.1091 E19 | 2.097 E6 | 9,261 | 1,937.012 | 441 | 92.239 | 21 | 4.392
28 | 3.048 E29 | 2.684 E8 | 21,952 | 3,768.966 | 784 | 134.606 | 28 | 4.807
36 | 3.72 E41 | 6.872 E10 | 46,656 | 6,700.223 | 1,296 | 186.117 | 36 | 5.17
45 | 1.962 E56 | 3.5124 E13 | 91,125 | 11,121.003 | 2,205 | 274.133 | 45 | 5.492
55 | 1.26964 E73 | 3.603 E16 | 166,375 | 17,488.613 | 3,025 | 317.975 | 55 | 5.781
66 | 5.443 E92 | 7.739 E19 | 287,496 | 26,329 | 4,356 | 398.93 | 66 |
78 | 1.132 E115 | 3.022 E23 | 474,552 | 38,240 | 6,084 | 490.26 | 78 |
91 | 1.352 E140 | 2.476 E27 | 753,571 | 53,891 | 8,281 | 592.21 | 91 |
105 | 1.081 E168 | 4.056 E31 | 1.157 E6 | 74,024 | 11,025 | 704.996 | 105 | 6.714
120 | 6.69 E198 | 1.329 E36 | | | | | 120 |
136 | | 8.711 E40 | | 131,089 | 18,496 | 963.895 | 136 |
153 | | 1.142 E46 | 3.58 E6 | | | | 153 |
171 | | 2.993 E51 | | 216,905 | 29,241 | 1,268 | 171 |
190 | 9.68 E351 | 1.569 E57 | | 273,271 | | | 190 |
210 | | 1.6455 E63 | 9.0261 E6 | 340,198 | 44,100 | 1,620 | 210 | 7.714
skip +21 to +23 | .. | .. | .. | .. | .. | .. | .. | ..
300 | 3.0606 E614 | 2.037 E90 | 2.7 E7 | 740,793 | 90,000 | 2,468 | 300 | 8.229
skip +25 | .. | .. | .. | .. | .. | .. | .. | ..
351 | | | 4.324 E7 | 1.042 E6 | 123,201 | 2,968 | 351 |
skip +27 to +30 | .. | .. | .. | .. | .. | .. | .. | ..
496 | | | 1.22 E8 | 2.203 E6 | 246,016 | 4,441 | 496 | 8.954
skip +32 to +35 | .. | .. | .. | .. | .. | .. | .. | ..
666 | 1.0106 E1593 | 3.062 E200 | 2.954 E8 | 4.16 E6 | 443,556 | 6,246 | 666 | 9.379
skip +37 to +39 | .. | .. | .. | .. | .. | .. | .. | ..
820 | | | 5.5139 E8 | 6.5085 E6 | 672,400 | 7,937 | 820 | 9.6795
skip +41 to +44 | .. | .. | .. | .. | .. | .. | .. | ..
1035 | | 3.682 E311 | 1.109 E9 | 1.073 E7 | 1.071 E6 | 10,365 | 1035 | 10.0154
skip +46 to +54 | .. | .. | .. | .. | .. | .. | .. | ..
1540 | | | 3.652 E9 | 2.51 E7 | 2.372 E6 | 16,306 | 1540 | 10.589
skip +56 to +68 | .. | .. | .. | .. | .. | .. | .. | ..
2415 | | | 1.4084 E10 | 6.554 E7 | 5.83 E6 | 27,139 | 2415 |
skip +70 to +88 | .. | .. | .. | .. | .. | .. | .. | ..
4005 | | | 6.424 E10 | 1.92 E8 | 1.604 E7 | 47,930 | 4005 | 11.968
skip +90 to +114 | .. | .. | .. | .. | .. | .. | .. | ..
6670 | | | 2.9674 E11 | 5.65 E8 | 4.45 E7 | 84,732 | 6670 | 12.7035
skip +116 to +140 | .. | .. | .. | .. | .. | .. | .. | ..
10,011 | 2.865 E37 | 4.086 E3013 | 1.0033 E12 | 1.332 E9 | 1.002 E8 | 133,039 | 10,011 | 13.289
25,000 | | | 1.5625 E13 | 9.131 E9 | 6.25 E8 | 365,241 | 25,000 | 14.61
100,000 | | | E15 | 1.661 E11 | E10 | 1.661 E6 | 100,000 | 16.61
250,000 | | | 1.5625 E16 | 1.121 E12 | 6.25 E10 | 4.483 E6 | 250,000 | 17.93
E6 | 8.264 E(5,565,708) | 9.9 E(301,029) | E18 | 1.993 E13 | E12 | 1.993 E7 | E6 | 19.932
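If you'd like to recompute (or extend) columns like these yourself, a few lines of Python with the standard math module will do it; this is just a sketch, and the function name is mine:

```python
import math

def complexity_row(n):
    """Recompute the table's columns for a given n (rounded where sensible)."""
    log_n = math.log2(n)
    return {
        "n": n,
        "n!": math.factorial(n),
        "2^n": 2 ** n,
        "n^3": n ** 3,
        "n^2·log2(n)": round(n * n * log_n, 3),
        "n^2": n * n,
        "n·log2(n)": round(n * log_n, 3),
        "log2(n)": round(log_n, 3),
    }

print(complexity_row(10))
# e.g. for n = 10: n! = 3628800 (3.6288 E6), 2^n = 1024, n^3 = 1000, log2(n) ≈ 3.322
```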

 

 

 

I really love this table because it shows just how massive n! and 2^n really are in comparison with the other functions.

Notice where the number of steps required for each function first crosses a million:

  • For n^2, it happens when the input is around a thousand. For n.log(n) (base 2), it is around a hundred thousand. But for 2^n it happens at n = 20, and for n!, at just n = 10. This just goes to show why complexity is key when writing your algorithms. There is a set of problems called NP-complete problems (the Travelling Salesman Problem is one) for which no polynomial-time algorithm is known; even with dynamic programming, the best exact algorithms take on the order of O(2^n) or O(n!) time. It has also been proven that a fast solution to any one of them would indirectly solve all the others, and deciding whether such a solution exists at all (the P vs NP question) is literally a million-dollar problem.
  • For log(n) (base 2), it isn’t shown, but basic logarithmic maths should point you to the last row of the 2^n column. See that number (9.9 E301,029)? That’s roughly how many inputs we must have before our O(log n) program needs more than a million steps.
    Putting that into perspective, the distance from the Earth to the Sun is around 1.5 E23 ….. picometers.
    This gives you an idea of just how efficient binary search, inserting into a binary search tree, and the like really are. If we have 100 billion sorted integers (which would take roughly 400 GB of disk space at 4 bytes each), binary search will find the right number in at most 37 steps (see the sketch just after this list).
  • The simple logarithmic function is just about as close to O(1) as you can get, which is why a balanced binary search tree is so very, very efficient at finding, inserting and deleting elements. The heap, the structure behind heap-sort (an O(n.log(n)) sorting algorithm), is itself a binary tree.
    In fact, there are very few operations on a balanced Binary Search Tree whose complexity exceeds O(n), and the everyday ones (search, insert, delete) are only O(log n). That’s why the BST is my favorite data structure.
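To make that binary-search claim concrete, here's a small Python sketch that counts the halving steps (the step counter is my own addition):

```python
import math

def binary_search(sorted_list, target):
    """Classic binary search; also counts how many times it halves the range."""
    lo, hi, steps = 0, len(sorted_list), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    found = lo < len(sorted_list) and sorted_list[lo] == target
    return found, steps

data = list(range(0, 1000, 7))        # a small sorted list (143 elements)
print(binary_search(data, 693))       # (True, 8) -- about log2(143) halvings

# For n = 100 billion elements, the search needs at most ceil(log2(n)) halvings:
print(math.ceil(math.log2(100_000_000_000)))   # 37
```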

I’ve often wondered whether it’s worthwhile making a ternary search tree, so I drew up a graph using the excellent Desmos graphing calculator.

In the graph below, you can see what I found: it doesn’t make much difference which base you use; logarithmic growth looks pretty similar either way.

 

Maths note: log(2) ~=0.3 (base 10)
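The reason the base hardly matters is the change-of-base rule; writing it out (my own note, in the same spirit as the maths note above):

\log_3 n \;=\; \frac{\log_2 n}{\log_2 3} \;\approx\; 0.63\,\log_2 n, \qquad \log_{10} n \;=\; \frac{\log_2 n}{\log_2 10} \;\approx\; 0.30\,\log_2 n

So switching the base only rescales the curve by a constant factor, which asymptotic notation ignores anyway.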

I think I’ve made my point, but if you’re still unconvinced, check this out:

You probably know that merge and heap sort are very efficient, with their O(n.log(n)) worst-case time complexity, but I think the numbers do the real talking:

  • To reach a million steps, O(n.log(n))-complexity merge/heap sort, require n=100,000 elements as input.
  • Bubble/insertion/quick sort with O(n^2) complexity require only an input of n=1,000 to reach 1 million steps.

That’s a hundredfold gap . . . but then you notice that non-comparative sorts like radix, bucket and counting sort can run in (roughly) O(n) time, i.e. time proportional to the number of elements in the input array . . . whereas it has been proven that comparison-based sorting algorithms can never do better than O(n.log n) in the worst case.
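As an example of a non-comparative sort, here's a rough counting sort sketch (assuming small non-negative integer keys; the function name is mine):

```python
def counting_sort(values, max_value):
    """Sort small non-negative integers in O(n + k) time, k = max_value + 1.
    No element is ever compared with another element."""
    counts = [0] * (max_value + 1)
    for v in values:                          # tally each key: O(n)
        counts[v] += 1
    result = []
    for key, count in enumerate(counts):      # replay the tallies in order: O(k)
        result.extend([key] * count)
    return result

print(counting_sort([4, 1, 3, 4, 0, 2], max_value=4))   # [0, 1, 2, 3, 4, 4]
```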

So why do the other sorting algorithms even exist?

It’s worth noting that even the best O(n.log(n)) comparison sorts and O(n) non-comparative sorts have an auxiliary space complexity of O(n), compared with the O(1) extra space of bubble and insertion sort, which take O(n^2) time (quicksort is also in-place, though its recursion uses a little stack space). Also, they all have their special sets of weaknesses.

Which is pretty bad. I mean, okay, every sorting algorithm needs at least O(n) space just to store its data structure of ‘n’ elements – but the auxiliary (i.e. ‘extra’) space required by most O(n.log(n)) and O(n) sorts is also O(n). And a recursive implementation adds still more on the system’s function call stack: about O(log n) levels for merge sort, and up to O(n) for an unluckily-pivoted quicksort.

The exception is heap-sort: if you build the heap in-place (i.e. you modify the original array you are given), the auxiliary space complexity becomes O(1), meaning you only need a fixed number of extra variables to implement the sort, and the time complexity still doesn’t exceed O(n.log(n)) steps.
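Here's a minimal sketch of that in-place heap-sort in Python (my own illustration: a max-heap is built inside the input array, so no extra storage is needed):

```python
def heapsort(a):
    """In-place heapsort: O(n·log n) time, O(1) auxiliary space."""
    n = len(a)

    def sift_down(start, end):
        # Push a[start] down until the max-heap property holds again.
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap in the array itself.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)

    # Repeatedly move the maximum to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

arr = [5, 1, 9, 3, 7]
heapsort(arr)
print(arr)   # [1, 3, 5, 7, 9]
```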

So my Completely Unrequested Recommendation (CUR) is: use Heap Sort! 😛

In actuality, so many different sorting algorithms exist mostly because, once you consider things like the average case, the limitations of different systems, and the functionality you need…

We need to use different sorts in different situations.

Quicksort may have quadratic time… however, that is only in the worst case; in the average case it is O(n.logn). Heapsort, meanwhile, is O(n.logn) always, because of the indirect, heap-building route it takes towards sorting the array. In real life, Quicksort variants are used very often; Java’s Arrays.sort() for primitive types, for instance, uses a dual-pivot Quicksort.

Then there’s stability.

Merge, radix, bubble and insertion sort are stable (they keep elements with equal keys in their original relative order), but heap sort is not. So for specific applications, you need a different kind of sort.
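A tiny illustration of what stability buys you: Python's built-in sorted() happens to be a stable, merge-based sort, so ties keep their original order (the example records below are made up):

```python
# Records: (name, grade). We sort by grade only.
records = [("Asha", "B"), ("Bhavesh", "A"), ("Chirag", "B"), ("Deepa", "A")]

# sorted() is stable: Bhavesh still comes before Deepa, and Asha before Chirag,
# because ties are kept in their original relative order.
print(sorted(records, key=lambda r: r[1]))
# [('Bhavesh', 'A'), ('Deepa', 'A'), ('Asha', 'B'), ('Chirag', 'B')]
```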

And another thing:

People keep finding new ways to improve them.

For example: Quicksort gives an average-case complexity of n.log(n) (base 2), which is optimal, provided you select your ‘pivot’ sensibly. Then someone came along and realized that if you choose your pivot as the median of your first, last and middle elements, you get slightly better results. And if you select your pivot randomly, it gets safer still, as you would foil the plans of someone who purposely wanted to attack your system by feeding it a worst-case input every time (it’s happened… people are weird).
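Here's a rough sketch of those pivot strategies in Python; note that this version is not in-place (it builds new lists), so it trades extra memory for clarity, and the helper names are mine:

```python
import random

def median_of_three(a):
    # The median of the first, middle and last elements, as mentioned above.
    first, mid, last = a[0], a[len(a) // 2], a[-1]
    return sorted([first, mid, last])[1]

def quicksort(a):
    """A sketch of quicksort with a randomly chosen pivot (my illustration,
    not the exact variant any particular library uses)."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)          # random pivot foils adversarial inputs;
                                      # median_of_three(a) would also work here
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(median_of_three([9, 3, 7, 3, 1, 8]))   # 8
print(quicksort([9, 3, 7, 3, 1, 8]))         # [1, 3, 3, 7, 8, 9]
```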

Still, if you know anything about algorithms, you know that calculating average case correctly is as difficult as teaching a dog how to play a piano . . . and not half as rewarding!

Welp, that’s all folks! I’m going to be trying to leaf around with all the concepts of trees in my next post, so keep your thumbs pointed to the stars and your browsers open!

 

Abhishek Divekar,

is a student at Veermata Jijabai Technological Institute. When he’s not juggling submissions and inhaling canteen food, he is an avid reader and spends his time learning about the world in general and computers in particular.