PEP 249 - DB API question

This topic is closed.
  • k3xji

    PEP 249 - DB API question

    Hi all,

    As development goes on for a server project, it turns out that I am
    using MySQLdb and DB interactions heavily. One question is bothering
    me: why don't we have a timeout for queries in PEP 249 (DB API)?

    Is it really safe to wait for a query to finish? That is, does it
    always return, even if the DB server goes down?

    Also, from my point of view, it could be a good feature. We could run
    long/non-critical DB queries with a timeout and slow/critical ones
    without one. That would give us a chance to prioritize/consume queries
    based on their criticality. And I don't see much effort in implementing
    this; one would only have to change the socket logic in the related DB
    API's source code, right?

    What do you think?

    Thanks
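
    For what it's worth, the socket-level change alluded to above could look
    roughly like this. This is only a sketch; `fetch_with_timeout` and the
    wire format are hypothetical, not any real driver's internals:

```python
import socket

def fetch_with_timeout(host, port, request, timeout):
    """Send a request and read a reply, bounding each socket operation."""
    sock = socket.create_connection((host, port), timeout=timeout)
    try:
        sock.settimeout(timeout)   # per-operation deadline on send/recv
        sock.sendall(request)
        return sock.recv(4096)     # raises socket.timeout if the server stalls
    finally:
        sock.close()
```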
  • James Mills

    #2
    Re: PEP 249 - DB API question

    On Wed, Nov 5, 2008 at 3:52 AM, k3xji <sumerc@gmail.com> wrote:
    > As development goes on for a server project, it turns out that I am
    > using MySQLdb and DB interactions heavily. One question is bothering
    > me: why don't we have a timeout for queries in PEP 249 (DB API)?
    Because not all database engines support this?
    > Is it really safe to wait for a query to finish? That is, does it
    > always return, even if the DB server goes down?
    Try using the non-blocking features (these may be RDBMS-specific).
    > Also, from my point of view, it could be a good feature. We could run
    > long/non-critical DB queries with a timeout and slow/critical ones
    > without one. That would give us a chance to prioritize/consume queries
    > based on their criticality. And I don't see much effort in implementing
    > this; one would only have to change the socket logic in the related DB
    > API's source code, right?
    Patches are welcome. A suggestion:

    Try spawning a new process to run your query
    in. Use the multiprocessing library. Your main
    application can then just poll the db/query processes
    to see if they're a) finished and b) have a result

    Your application server can also c) kill long-running
    queries that are "deemed" to be taking "too long"
    and may not finish (e.g. Cartesian joins).
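
    A minimal sketch of that suggestion. `run_query` is a hypothetical
    stand-in for a blocking DB call, simulated here with a sleep:

```python
import multiprocessing
import time

def run_query(seconds, result_queue):
    # Hypothetical stand-in for a blocking DB call; `seconds` simulates
    # how long the query takes.
    time.sleep(seconds)
    result_queue.put("rows")

def query_with_timeout(seconds, timeout):
    """Run the query in a child process and kill it if it takes too long."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_query, args=(seconds, queue))
    proc.start()
    proc.join(timeout)        # a) poll/wait for the process to finish
    if proc.is_alive():       # still running: deemed "too long"
        proc.terminate()      # c) kill the long-running query
        proc.join()
        return None
    return queue.get()        # b) fetch the result

if __name__ == "__main__":
    print(query_with_timeout(0.1, 2.0))   # fast query completes
    print(query_with_timeout(5.0, 0.5))   # slow query gets killed
```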

    --JamesMills

    --
    --
    -- "Problems are solved by method"


    • M.-A. Lemburg

      #3
      Re: PEP 249 - DB API question

      On 2008-11-04 18:52, k3xji wrote:
      > Hi all,
      >
      > As development goes on for a server project, it turns out that I am
      > using MySQLdb and DB interactions heavily. One question is bothering
      > me: why don't we have a timeout for queries in PEP 249 (DB API)?
      >
      > Is it really safe to wait for a query to finish? That is, does it
      > always return, even if the DB server goes down?
      >
      > Also, from my point of view, it could be a good feature. We could run
      > long/non-critical DB queries with a timeout and slow/critical ones
      > without one. That would give us a chance to prioritize/consume queries
      > based on their criticality. And I don't see much effort in implementing
      > this; one would only have to change the socket logic in the related DB
      > API's source code, right?
      >
      > What do you think?
      This would be a question for the Python DB-SIG mailing list.

      Things like timeouts and their handling are generally very
      database-specific. It is difficult to provide a reliable way of
      configuring this, and it may very well not even be within the
      scope of a database API (e.g. because the timeout has to be
      configured in the database server using some config file).

      I'd suggest you check whether MySQL provides a way to set timeouts
      and then just use that for your project.
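
      As a hedged sketch of that advice: MySQL exposes several server-side
      timeout knobs that can be set per session through any DB API cursor
      (e.g. one from MySQLdb). Which variables exist depends on the server
      version (`max_execution_time`, for instance, only appeared in MySQL
      5.7), so verify the names below against your server:

```python
# Server-side timeout settings, issued per session. Variable availability
# depends on the MySQL version.
TIMEOUT_STATEMENTS = [
    "SET SESSION max_execution_time = 2000",  # ms; SELECTs only (MySQL 5.7+)
    "SET SESSION net_read_timeout = 30",      # seconds
    "SET SESSION net_write_timeout = 60",     # seconds
]

def apply_timeouts(cursor):
    """Apply the timeout settings on a DB API (PEP 249) cursor."""
    for statement in TIMEOUT_STATEMENTS:
        cursor.execute(statement)
```

      MySQLdb's `connect()` also accepts a `connect_timeout` argument, though
      that only bounds the initial connection, not individual queries.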

      --
      Marc-Andre Lemburg
      eGenix.com

      Professional Python Services directly from the Source (#1, Nov 04 2008)
      >> Python/Zope Consulting and Support ... http://www.egenix.com/
      >> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
      >> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
      ________________________________________________________________________

      :::: Try mxODBC.Zope.DA for Windows, Linux, Solaris, MacOSX for free! ::::

      eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str. 48
      D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
      Registered at Amtsgericht Duesseldorf: HRB 46611


      • k3xji

        #4
        Re: PEP 249 - DB API question

        > Try spawning a new process to run your query
        > in. Use the multiprocessing library. Your main
        > application can then just poll the db/query processes
        > to see if they're a) finished and b) have a result.
        >
        > Your application server can also c) kill long-running
        > queries that are "deemed" to be taking "too long"
        > and may not finish (e.g. Cartesian joins).
        Just thinking aloud...

        A more backward-compatible way to do that would be to have a
        pool of threads running the queries, with the main thread
        polling to see whether the child threads are taking too long
        to complete. But wouldn't that be a nightmare from a
        performance point of view? You have a good reason to suggest
        multiprocessing, right? Still, I can at least implement my
        critical queries with this kind of design, as there are not
        many of them.

        Good idea, thanks...
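
        A sketch of the thread-based variant discussed above, which also
        shows the limitation that motivates the multiprocessing suggestion:
        a Python thread cannot be killed, only abandoned. `run_query` is
        again a hypothetical stand-in for a blocking DB call:

```python
import threading
import time

def run_query(seconds, results):
    # Hypothetical stand-in for a blocking DB call.
    time.sleep(seconds)
    results.append("rows")

def query_in_thread(seconds, timeout):
    """Run the query in a thread; give up (but cannot kill it!) on timeout."""
    results = []
    worker = threading.Thread(target=run_query, args=(seconds, results))
    worker.daemon = True     # don't let a stuck query block interpreter exit
    worker.start()
    worker.join(timeout)     # the "polling" step: wait up to `timeout`
    if worker.is_alive():
        # Unlike a process, the thread cannot be terminated; it keeps
        # running (and holding its DB connection) in the background.
        return None
    return results[0]
```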


        • James Mills

          #5
          Re: PEP 249 - DB API question

          On Wed, Nov 5, 2008 at 6:13 AM, k3xji <sumerc@gmail.com> wrote:
          > > Try spawning a new process to run your query
          > > in. Use the multiprocessing library. Your main
          > > application can then just poll the db/query processes
          > > to see if they're a) finished and b) have a result.
          > >
          > > Your application server can also c) kill long-running
          > > queries that are "deemed" to be taking "too long"
          > > and may not finish (e.g. Cartesian joins).
          >
          > Just thinking aloud...
          >
          > A more backward-compatible way to do that would be to have a
          > pool of threads running the queries, with the main thread
          > polling to see whether the child threads are taking too long
          > to complete. But wouldn't that be a nightmare from a
          > performance point of view? You have a good reason to suggest
          > multiprocessing, right? Still, I can at least implement my
          > critical queries with this kind of design, as there are not
          > many of them.
          I hate threads :) To be perfectly honest, I would
          use processes for performance reasons, and if it
          were me, I would use my new shiny circuits [1]
          library to trigger events when the queries are done.

          --JamesMills

          [1] http://trac.softcircuit.com.au/circuits/

          --
          --
          -- "Problems are solved by method"


          • Lawrence D'Oliveiro

            #6
            Re: PEP 249 - DB API question

            In message <mailman.3470.1225823782.3487.python-list@python.org>, James
            Mills wrote:
            > Try spawning a new process to run your query in.
            One approach might be to have two processes: the worker process and the
            watcher process. The worker does the work, of course. Before performing any
            call that may hang, the worker sends a message to the watcher: "if you
            don't hear back from me in x seconds, kill me". It then does the call.
            After the call, it sends another message to the watcher: "OK, I'm back,
            cancel the timeout".
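
            A minimal sketch of this worker/watcher protocol using
            multiprocessing. It is Unix-only because of SIGKILL, and the
            call that may hang is simulated with a sleep:

```python
import multiprocessing
import os
import signal
import time

def watcher(conn, worker_pid):
    """Kill the worker if an armed timer expires before it reports back."""
    deadline = None
    while True:
        wait = None if deadline is None else max(0.0, deadline - time.time())
        if conn.poll(wait):                  # a message arrived in time
            tag, value = conn.recv()
            if tag == "arm":                 # "kill me in x seconds"
                deadline = time.time() + value
            elif tag == "disarm":            # "OK, I'm back, cancel it"
                deadline = None
            elif tag == "done":
                return
        else:                                # timer expired with no message
            os.kill(worker_pid, signal.SIGKILL)
            return

def worker(conn, call_seconds, timeout):
    conn.send(("arm", timeout))              # before the call that may hang
    time.sleep(call_seconds)                 # stand-in for the risky call
    conn.send(("disarm", None))              # after the call
    conn.send(("done", None))

def run(call_seconds, timeout):
    """Return True if the worker survived, False if the watcher killed it."""
    worker_end, watcher_end = multiprocessing.Pipe()
    w = multiprocessing.Process(target=worker,
                                args=(worker_end, call_seconds, timeout))
    w.start()
    watch = multiprocessing.Process(target=watcher, args=(watcher_end, w.pid))
    watch.start()
    w.join()
    watch.join()
    return w.exitcode == 0                   # SIGKILL gives a nonzero exit code

if __name__ == "__main__":
    print(run(0.1, 1.0))   # fast call: worker survives
    print(run(2.0, 0.3))   # slow call: watcher kills the worker
```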
