Each of the sorting algorithms we considered last time (selection,
bubble, and insertion sort) has nested loops, where each loop's
length depends on the size of the list being sorted (N). This means
the number of steps required for each sort, or the runtime, depends
on the size of the list squared. These are called quadratic or
N-squared sorts, because if the size of the list doubles, the number
of steps quadruples. For example, if the runtime of the bubble sort
was 10 seconds for a list of size N, it would take approximately
40 seconds to sort a list of size 2N.
Let's see if we can show this by actually timing a few sorts. Here is the code we will use to time a particular sort:
from time import time
t1 = time()
selectionSort(L)
t2 = time()
print "selection sort time: %8.4f" % (t2-t1)
If we do the above for lists of size N, 2N, and 4N, we should see
the quadratic behavior.
$ python timesorts.py
N: 2000
selection sort time: 0.1416
bubble sort time: 0.4503
insertion sort time: 0.2378
python sort time: 0.0003
$ python timesorts.py
N: 4000
selection sort time: 0.5836
bubble sort time: 1.7433
insertion sort time: 0.9037
python sort time: 0.0007
$ python timesorts.py
N: 8000
selection sort time: 2.2732
bubble sort time: 7.1467
insertion sort time: 4.2049
python sort time: 0.0015
Notice three things:

1. each time N doubles, the runtimes of the selection, bubble, and
   insertion sorts roughly quadruple, which is exactly the quadratic
   behavior we expected
2. python's built-in L.sort() method is much faster than any of our
   sorts
3. the built-in sort's runtime only roughly doubles when N doubles,
   so it cannot be a quadratic sort
How does the python L.sort() method work? Are there other, better
sorting algorithms?
Here is the basic merge sort algorithm:

1. split the list into two halves
2. sort each half
3. merge the two sorted halves back into one sorted list
The merge step is actually not that hard, given two sorted lists. If you know each list is already sorted, you can just compare the first item in each list, and take the smaller of the two.
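We will write merge() for real next week, but here is one possible
sketch of that idea, assuming merge(L1, L2, L) copies the merged
values back into L (matching how mergeSort() calls it below):

def merge(L1, L2, L):
    """merge sorted lists L1 and L2 back into L"""
    i = j = k = 0
    # repeatedly take the smaller front item of the two lists
    while i < len(L1) and j < len(L2):
        if L1[i] <= L2[j]:
            L[k] = L1[i]
            i = i + 1
        else:
            L[k] = L2[j]
            j = j + 1
        k = k + 1
    # copy over any leftovers (only one of these loops will run)
    while i < len(L1):
        L[k] = L1[i]
        i = i + 1
        k = k + 1
    while j < len(L2):
        L[k] = L2[j]
        j = j + 1
        k = k + 1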
A more interesting question is, how do we sort each of the split lists? Do we use selection sort, bubble sort, or some other sort?
Since we are talking about merge sort, can we use merge sort to sort
each of the split lists? This is called recursion: we are writing a
function (mergeSort()), and somewhere in that function we call that
function (or a copy of that function) to help solve the problem.
This may seem odd, but it is perfectly fine. If you think about
stack diagrams, calling any function just puts another function
frame on the stack. Assuming there is an end somewhere, a function
can safely call itself and put another copy of its frame on the
stack.
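Merge sort aside, here is a tiny (hypothetical) example of a
function that calls itself, with an end (a base case) so the
recursion eventually stops:

def countdown(n):
    """print n, n-1, ..., 1, then stop"""
    if n <= 0:
        print("done!")        # base case: the end of the recursion
    else:
        print(n)
        countdown(n - 1)      # the function calls itself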
So here is a version of the mergeSort() function, assuming we have
a valid merge() function, which is not hard to write (we will write
it next week):
def mergeSort(L):
    """use merge sort to sort L from low to high"""
    if len(L) > 1:
        # split into two lists
        half = len(L)//2
        L1 = L[0:half]
        L2 = L[half:]
        # sort each list
        mergeSort(L1)
        mergeSort(L2)
        # merge them back into one sorted list
        merge(L1, L2, L)
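Once merge() is written, a quick test of mergeSort() might look
like this:

L = [5, 1, 4, 2, 8, 3]
mergeSort(L)
print(L)     # should print [1, 2, 3, 4, 5, 8]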
We will write this and test it next week, after we learn all about recursion...