postgresql locks the whole table!

  • Stephan Szabo

    #16
    Re: postgresql locks the whole table!

    On Sun, 7 Dec 2003, Greg Stark wrote:
    > It's not strictly necessary to have a list of all xids at all. The normal
    > "shared read lock" is just "take the write lock, increment the readers
    > counter, unlock" Anyone who wants to write has to wait (using, eg, a condition
    > variable) until the readers count goes to 0.
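
The readers-counter scheme quoted above can be sketched roughly like this (a minimal single-process illustration using Python's threading module; it is not PostgreSQL's actual implementation):

```python
import threading

class SharedLock:
    """Sketch of the quoted scheme: a mutex protects a readers counter,
    and writers wait on a condition variable until the counter is zero."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._cond = threading.Condition(self._mutex)
        self._readers = 0

    def acquire_shared(self):
        # "take the write lock, increment the readers counter, unlock"
        with self._mutex:
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                # wake any writer waiting for the count to reach zero
                self._cond.notify_all()

    def acquire_exclusive(self):
        # hold the mutex itself as the exclusive lock; new readers
        # block on it in acquire_shared
        self._cond.acquire()
        while self._readers > 0:
            self._cond.wait()

    def release_exclusive(self):
        self._cond.release()
```

Note that this sketch has exactly the properties being debated in the thread: there is no record of *who* the readers are, so nothing can clean up after a reader that dies without decrementing the counter, and there is no information for a deadlock detector to work with.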

    There are some storage/cleanup questions though. If that's stored in the
    tuple header, what happens after a crash?

    In addition, how should the locks be granted for a sequence like:
    T1: get shared lock on row A
    T2: get exclusive lock on row A
    T3: get shared lock on row A
    Does T3 get the lock or not? If it does, then you have the possibility of
    freezing out T2 for a very long time and badly hurting update/delete
    performance. If it doesn't, then how are you keeping track of the fact
    that there are one or more people who want exclusive locks on the same
    row that are "in front" of you?
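
One common answer to this granting question (a hypothetical sketch, not a description of what PostgreSQL does) is to grant requests strictly in arrival order, so a later shared request like T3 queues behind the pending exclusive request rather than starving it:

```python
from collections import deque

class RowLockQueue:
    """Illustrative FIFO lock manager for a single row: a request is
    granted only if it is compatible with current holders AND nothing
    is already queued ahead of it."""

    def __init__(self):
        self.holders = []       # (xid, mode) pairs currently granted
        self.waiters = deque()  # (xid, mode) pairs waiting, in FIFO order

    def _compatible(self, mode):
        if not self.holders:
            return True
        # shared is compatible only with other shared holders
        return mode == 'shared' and all(m == 'shared' for _, m in self.holders)

    def request(self, xid, mode):
        if not self.waiters and self._compatible(mode):
            self.holders.append((xid, mode))
            return True     # granted immediately
        self.waiters.append((xid, mode))
        return False        # must wait

    def release(self, xid):
        self.holders = [(x, m) for x, m in self.holders if x != xid]
        # grant queued requests in order while they stay compatible
        while self.waiters and self._compatible(self.waiters[0][1]):
            self.holders.append(self.waiters.popleft())
```

With this policy T3 does not get the lock: it waits behind T2, which answers the starvation half of the question but raises exactly the bookkeeping problem described above, since the queue of waiters has to live somewhere.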
    > This gets the right semantics but without the debugging info of a list of
    > lockers. Other than debugging the only advantage I see to having the list of
    > lockers is for deadlock detection. Is that absolutely mandatory?

    I think so, yes, especially if we're going to use it for things like
    foreign keys. It's too easy to get into a deadlock with foreign keys
    (even when implemented through shared locks) and I think having undetected
    deadlocks would be even worse than our current behavior. At least with
    the current behavior you get an indication that something is wrong.
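
Deadlock detection in this context usually means looking for a cycle in the wait-for graph, which is why a list of lockers matters: without knowing who holds what, the graph cannot be built. A minimal sketch (the graph representation here is illustrative, not PostgreSQL's actual deadlock checker):

```python
def has_deadlock(waits_for):
    """waits_for maps each transaction id to the set of transaction ids
    it is blocked on; a cycle in this graph is a deadlock."""
    visited, on_stack = set(), set()

    def dfs(xid):
        visited.add(xid)
        on_stack.add(xid)
        for other in waits_for.get(xid, ()):
            if other in on_stack:
                return True  # found a cycle: deadlock
            if other not in visited and dfs(other):
                return True
        on_stack.discard(xid)
        return False

    return any(dfs(x) for x in waits_for if x not in visited)
```

For example, two foreign-key updates that each wait on the other's row lock produce `{'T1': {'T2'}, 'T2': {'T1'}}`, a cycle this check reports.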

    ---------------------------(end of broadcast)---------------------------
    TIP 7: don't forget to increase your free space map settings


    • Tom Lane

      #17
      Re: postgresql locks the whole table!

      > Greg Stark wrote:
      >> This gets the right semantics but without the debugging info of a list of
      >> lockers. Other than debugging the only advantage I see to having the list of
      >> lockers is for deadlock detection. Is that absolutely mandatory?

      No, deadlock detection is not optional.

      Mike Mascari <mascarm@mascari.com> writes:
      > What happens if a backend is killed and never decrements its reference
      > count?

      Even if it's not killed, how does it know to decrement the reference
      count? You still need a list of all locked tuples *somewhere*. Perhaps
      a technique like this would allow the list to not be in shared memory,
      which is helpful, but it's far from an ideal solution.

      regards, tom lane


      • Greg Stark

        #18
        Re: postgresql locks the whole table!

        Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
        > In addition, how should the locks be granted for a sequence like:
        > T1: get shared lock on row A
        > T2: get exclusive lock on row A
        > T3: get shared lock on row A
        > Does T3 get the lock or not? If it does, then you have the possibility of
        > freezing out T2 for a very long time and badly hurting update/delete
        > performance.

        Well, this is a fundamental question that applies to any scheme for
        handling shared locks. You get into all sorts of fun stuff like livelock
        and priority inversion that real-time systems folk invent just to
        torture programmers.

        --
        greg


