pg_dump out of shared memory

  • Thomas F. O'Connell

    pg_dump out of shared memory

    In using pg_dump to dump an existing postgres database, I get the
    following:

    pg_dump: WARNING: out of shared memory
    pg_dump: attempt to lock table <table name> failed: ERROR: out of
    shared memory
    HINT: You may need to increase max_locks_per_transaction.

    postgresql.conf just has the default of 1000 shared_buffers. The
    database itself has thousands of tables, some of which have rows
    numbering in the millions. Am I correct in thinking that, despite the
    hint, it's more likely that I need to up the shared_buffers?

    Or is it that pg_dump is an example of "clients that touch many
    different tables in a single transaction" [from the documentation for
    max_locks_per_transaction], and I actually ought to abide by the hint?

    -tfo
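
    A minimal sketch of the change the hint points at, assuming the fix is
    simply to raise max_locks_per_transaction in postgresql.conf (the value
    128 below is only illustrative, and the setting takes effect only after
    a server restart):

        # postgresql.conf -- the shared lock table is sized as roughly
        # max_locks_per_transaction * max_connections entries
        max_locks_per_transaction = 128    # default is 64
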
  • Tom Lane

    #2
    Re: pg_dump out of shared memory

    tfo@alumni.brown.edu (Thomas F. O'Connell) writes:
    > In using pg_dump to dump an existing postgres database, I get the
    > following:
    >
    > pg_dump: WARNING: out of shared memory
    > pg_dump: attempt to lock table <table name> failed: ERROR: out of
    > shared memory
    > HINT: You may need to increase max_locks_per_transaction.
    >
    > Am I correct in thinking that, despite the
    > hint, it's more likely that I need to up the shared_buffers?

    No.

    regards, tom lane

      • Thomas F. O'Connell

        #4
        Re: pg_dump out of shared memory

        tfo@alumni.brown.edu (Thomas F. O'Connell) wrote in message news:
        > postgresql.conf just has the default of 1000 shared_buffers. The
        > database itself has thousands of tables, some of which have rows
        > numbering in the millions. Am I correct in thinking that, despite the
        > hint, it's more likely that I need to up the shared_buffers?

        So the answer here, verified by Tom Lane and by my own remedy to the
        problem, is "no". Now I'm curious: why does pg_dump require that
        max_connections * max_locks_per_transaction be greater than the
        number of objects in the database? Or if that's not the right
        assumption about how pg_dump is working, how does pg_dump obtain its
        locks, and why is the error that it runs out of shared memory? Is
        there a portion of shared memory that's set aside for locks? What is
        the shared lock table?

        -tfo
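
        One rough way to see the number of locks involved (a sketch: this
        counts every ordinary table, system catalogs included, so it slightly
        overstates what pg_dump itself will lock):

            SELECT count(*) AS tables_needing_a_lock
            FROM pg_class
            WHERE relkind = 'r';   -- 'r' = ordinary tables
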


        • Tom Lane

          #5
          Re: pg_dump out of shared memory

          tfo@alumni.brown.edu (Thomas F. O'Connell) writes:
          > Now I'm curious: why does pg_dump require that
          > max_connections * max_locks_per_transaction be greater than the
          > number of objects in the database?

          Not objects, just tables. pg_dump takes AccessShareLock (the weakest
          kind of lock) on each table it intends to dump. This is basically
          just to prevent someone from dropping the table underneath it. (It
          would actually have to take that lock anyway as a byproduct of reading
          the table contents, but we grab the locks ASAP during pg_dump startup
          to reduce the risks of problems from concurrent drops.)

          On a database with thousands of tables, this could easily require more
          locks than the default lock table size can hold. Most normal apps don't
          need more than a few tables locked within any one transaction, which is
          why the table size is calculated as a multiple of max_connections.
          There's a great deal of slop involved, because we pad the shared memory
          size by 100K or so which is room for quite a few more lock entries than
          the nominal table size ... but eventually you'll run out of room.

          regards, tom lane
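
          A rough sketch of the arithmetic, assuming the stock defaults of
          max_locks_per_transaction = 64 and max_connections = 100 (an
          assumption; check postgresql.conf for the actual values):

              nominal lock table size = max_locks_per_transaction * max_connections
                                      = 64 * 100
                                      = 6400 lock slots (plus the slop noted above)

          so a single pg_dump transaction that must take AccessShareLock on,
          say, 10,000 tables overruns the table even with no other sessions
          holding locks. Raising max_locks_per_transaction (and restarting the
          server) enlarges the shared lock table accordingly.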

