which of these 2 quicksorts is faster?

This topic is closed.
  • process

    which of these 2 quicksorts is faster?

    qsort seems to handle bigger lists, making fewer recursive calls
    before finishing (quicksort blows the stack when sorting
    range(100, -1000, -1)).
    qsort does more work per call though, right? Is there a way to speed
    that up?

    Is the built-in sort not defined recursively?

    def quicksort(lista):
        if len(lista) != 0:
            return quicksort([x for x in lista[1:] if x < lista[0]]) + [lista[0]] + \
                   quicksort([x for x in lista[1:] if x >= lista[0]])
        else:
            return []

    def qsort(lista):
        l = len(lista)
        if len(lista) != 0:
            return qsort([x for x in lista[:l/2]+lista[l/2+1:] if x < lista[l/2]]) + \
                   [lista[l/2]] + \
                   qsort([x for x in lista[:l/2]+lista[l/2+1:] if x >= lista[l/2]])
        else:
            return []
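For the literal question (which one is faster), a quick measurement settles it. A minimal sketch with timeit, assuming Python 3 (hence `//` for the middle index; variable names lightly renamed), comparing both posted versions against the built-in sort:

```python
import random
import timeit

def quicksort(lista):
    # first-element pivot, as in the post above
    if len(lista) != 0:
        return (quicksort([x for x in lista[1:] if x < lista[0]])
                + [lista[0]]
                + quicksort([x for x in lista[1:] if x >= lista[0]]))
    return []

def qsort(lista):
    # middle-element pivot, as in the post above
    n = len(lista)
    if n != 0:
        pivot = lista[n // 2]
        rest = lista[:n // 2] + lista[n // 2 + 1:]
        return (qsort([x for x in rest if x < pivot])
                + [pivot]
                + qsort([x for x in rest if x >= pivot]))
    return []

data = list(range(1000))
random.shuffle(data)

for f in (quicksort, qsort, sorted):
    t = timeit.timeit(lambda f=f: f(data), number=50)
    print(f.__name__, round(t, 4))
```

On random input the two functional versions come out close, and the built-in `sorted` (Timsort, implemented in C) wins by a wide margin.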
  • Fredrik Lundh

    #2
    Re: which of these 2 quicksorts is faster?

    process wrote:
    > qsort can handle bigger lists it seems, making less recursive calls
    > before finishing(quick sort blows the stack when sorting
    > range(100,-1000,-1).
    > qsort does more work though right? is there a way to speed up that?
    >
    > is the built-in sort not defined recursively?
    what makes you think you can write a better sort than the built-in
    algorithm by typing in some toy quick-sort implementations from a
    "sorting for dummies" article?

    </F>


    • process

      #3
      Re: which of these 2 quicksorts is faster?

      On Sep 10, 12:29 pm, Fredrik Lundh <fred...@pythonware.com> wrote:
      > process wrote:
      > > qsort can handle bigger lists it seems, making less recursive calls
      > > before finishing(quick sort blows the stack when sorting
      > > range(100,-1000,-1).
      > > qsort does more work though right? is there a way to speed up that?
      > >
      > > is the built-in sort not defined recursively?
      >
      > what makes you think you can write a better sort than the built-in
      > algorithm by typing in some toy quick-sort implementations from a
      > "sorting for dummies" article?
      >
      > </F>
      Where did I write that I was trying to do that? I am merely trying to
      learn.

      Get some social skills dude.


      • Marc 'BlackJack' Rintsch

        #4
        Re: which of these 2 quicksorts is faster?

        On Wed, 10 Sep 2008 03:17:30 -0700, process wrote:
        > qsort can handle bigger lists it seems, making less recursive calls
        > before finishing(quick sort blows the stack when sorting
        > range(100,-1000,-1).
        It just seems so because that `range()` list is the worst case for
        `quicksort()` but not for `qsort()`. If you feed `qsort()` a list
        constructed to always leave one recursive call with the empty list, it
        will reach the recursion limit too.

        Ciao,
        Marc 'BlackJack' Rintsch
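Marc's claim is easy to check by instrumenting a pivot-parameterised version of the functional quicksort to record its maximum recursion depth (a sketch; the helper names are mine, and the sort body follows the posted code):

```python
def counting_sort_factory(pick_pivot):
    """Build a functional quicksort that records its maximum recursion
    depth.  pick_pivot(lista) returns the index of the pivot element."""
    state = {'depth': 0, 'max': 0}

    def sort(lista):
        state['depth'] += 1
        state['max'] = max(state['max'], state['depth'])
        if lista:
            i = pick_pivot(lista)
            pivot = lista[i]
            rest = lista[:i] + lista[i + 1:]
            result = (sort([x for x in rest if x < pivot])
                      + [pivot]
                      + sort([x for x in rest if x >= pivot]))
        else:
            result = []
        state['depth'] -= 1
        return result

    return sort, state

first, first_state = counting_sort_factory(lambda l: 0)              # quicksort's pivot
middle, middle_state = counting_sort_factory(lambda l: len(l) // 2)  # qsort's pivot

data = list(range(300, 0, -1))  # reverse-sorted: worst case for the first-element pivot
first(data)
middle(data)
print(first_state['max'], middle_state['max'])
```

On this input the first-element pivot recurses a few hundred levels deep while the middle-element pivot stays near log2(n); an input arranged so that the middle element is always the minimum would invert the result, exactly as Marc says.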


        • Christian Heimes

          #5
          Re: which of these 2 quicksorts is faster?

          Fredrik Lundh wrote:
          > what makes you think you can write a better sort than the built-in
          > algorithm by typing in some toy quick-sort implementations from a
          > "sorting for dummies" article?
          Anybody who can FULLY understand Tim's text at

          can write a better sorting algorithm ... *scnr*

          :]

          Christian



            • Terry Reedy

              #7
              Re: which of these 2 quicksorts is faster?



              process wrote:
              > qsort can handle bigger lists it seems, making less recursive calls
              > before finishing(quick sort blows the stack when sorting
              > range(100,-1000,-1).
              > qsort does more work though right? is there a way to speed up that?
              If you are worried about speed, these 'neat' functional definitions are
              *not* the way to go. The standard in-place procedural version makes NO
              copies of the list and no concatenations of sorted pieces. It also
              scans and partitions the list once instead of twice for each call.
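A sketch of the in-place version Terry describes, here using the Lomuto partition scheme (one common choice; the function name is mine): one partitioning pass per call, no list copies, no concatenation.

```python
def quicksort_inplace(a, lo=0, hi=None):
    """Sort a[lo:hi+1] in place: one partitioning pass per call, no copies."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]            # Lomuto scheme: last element as pivot
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:     # move smaller elements to the front
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # swap pivot into its final position
        quicksort_inplace(a, lo, i - 1)
        quicksort_inplace(a, i + 1, hi)
```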
              > is the built-in sort not defined recursively?
              Definition != implementation. It is trivial to turn the second
              recursive call into iteration. More work and an explicit stack (easy
              in Python) will do the same for the first.
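The explicit-stack transformation Terry mentions might look like this (a sketch; Lomuto partition again, with the larger half pushed first so the pending-range stack stays small):

```python
def quicksort_iter(a):
    """Quicksort with an explicit stack of pending (lo, hi) ranges
    instead of recursion, so deep inputs cannot blow the call stack."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Push the larger half first; the smaller half is then popped and
        # finished before it, bounding the stack at O(log n) entries.
        left, right = (lo, i - 1), (i + 1, hi)
        if (i - lo) > (hi - i):
            stack.append(left)
            stack.append(right)
        else:
            stack.append(right)
            stack.append(left)
```

This sorts range(100, -1000, -1), the input that blows the stack for the recursive quicksort in the original post, without trouble.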
              > def quicksort(lista):
              >     if len(lista) != 0:
              For speed, don't 'sort' a list of length 1. In fact, for speed,
              special-case lists of length 2 and possibly even longer 'short' lists.
              >         return quicksort([x for x in lista[1:] if x < lista[0]]) + [lista[0]] + \
              >                quicksort([x for x in lista[1:] if x >= lista[0]])
              >     else:
              >         return []
              >
              > def qsort(lista):
              >     l = len(lista)
              In some fonts, 1 and l are extremely similar, so I initially read l/2
              below as 1/2. Any of L or ln or n or sz would be clearer.
              >     if len(lista) != 0:
              >         return qsort([x for x in lista[:l/2]+lista[l/2+1:] if x < lista[l/2]]) + \
              >                [lista[l/2]] + \
              >                qsort([x for x in lista[:l/2]+lista[l/2+1:] if x >= lista[l/2]])
              >     else:
              >         return []
              The difference between your versions is the deterministic choice of
              pivot element in the (redundant) double partitioning. It is generally
              better to pick one at random each time, or to use the median value
              of the first, middle, and last elements. Either way, a consistently
              bad choice that leads to unbalanced partitions and deep recursion
              is less likely.
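A sketch of the median-of-three pivot choice, applied to the functional style of the original post (function names are mine):

```python
def median_of_three(lista):
    """Index of the median of the first, middle and last elements."""
    n = len(lista)
    candidates = [(lista[0], 0), (lista[n // 2], n // 2), (lista[-1], n - 1)]
    candidates.sort()           # sort the three (value, index) pairs
    return candidates[1][1]     # middle pair holds the median's index

def qsort_m3(lista):
    if len(lista) < 2:          # lists of length 0 and 1 need no sorting
        return list(lista)
    i = median_of_three(lista)
    pivot = lista[i]
    rest = lista[:i] + lista[i + 1:]
    return (qsort_m3([x for x in rest if x < pivot])
            + [pivot]
            + qsort_m3([x for x in rest if x >= pivot]))
```

On the reverse-sorted range(100, -1000, -1) the median of first, middle and last is the middle element, so the recursion stays shallow where the first-element pivot hits the recursion limit.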

              Terry Jan Reedy
