Replacing records

  • Greg Stark

    #16
    Re: Replacing records


    Richard Ellis <rellis9@yahoo.com> writes:
    [color=blue]
    > True, if the row does not already exist. But in that situation,
    > because of the unique constraint premise in the original quote, there
    > is always going to be at least one failed transaction. So the battle
    > is already lost before it's even begun.[/color]

    Well, no, that was the point. Ideally he wants to try to catch the duplicate
    without producing an exception because he can't do nested transactions.

    There's no parent record to this record in another table? You could lock the
    parent record with SELECT FOR UPDATE, then do a SELECT count(*) on this table,
    and do the insert or update as appropriate, then release the lock on the
    parent record.

    That's not great if you're doing lots of inserts on the same parent record, or
    if the parent record is being updated frequently, but it's way better than
    doing a table lock.
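    The parent-row locking approach could be sketched like this; a minimal,
    hypothetical example using sqlite3 from the Python standard library as a
    stand-in for PostgreSQL (sqlite3 has no SELECT ... FOR UPDATE, so BEGIN
    IMMEDIATE plays the role of the row lock here), with invented table and
    column names:

```python
import sqlite3

# Sketch of "lock the parent row, then check-and-insert-or-update".
# In PostgreSQL the lock would be taken with SELECT ... FOR UPDATE on the
# parent row; sqlite3 (used here only so the sketch runs) has no row locks,
# so BEGIN IMMEDIATE stands in as a coarse write lock.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (parent_id INTEGER, key TEXT, val TEXT,
                                    UNIQUE (parent_id, key))""")
conn.execute("INSERT INTO parent (id) VALUES (1)")

def upsert_child(conn, parent_id, key, val):
    # PostgreSQL equivalent: SELECT 1 FROM parent WHERE id = %s FOR UPDATE
    conn.execute("BEGIN IMMEDIATE")
    n = conn.execute("SELECT count(*) FROM child WHERE parent_id = ? AND key = ?",
                     (parent_id, key)).fetchone()[0]
    if n == 0:
        conn.execute("INSERT INTO child (parent_id, key, val) VALUES (?, ?, ?)",
                     (parent_id, key, val))
    else:
        conn.execute("UPDATE child SET val = ? WHERE parent_id = ? AND key = ?",
                     (val, parent_id, key))
    conn.execute("COMMIT")  # releases the lock taken at BEGIN

upsert_child(conn, 1, "a", "first")   # no row yet: takes the INSERT branch
upsert_child(conn, 1, "a", "second")  # row exists: takes the UPDATE branch
```

    As the post notes, serializing all upserts behind one lock is correct but
    can become a bottleneck when many sessions touch the same parent row.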

    FWIW:

    Jan Wieck <JanWieck@Yahoo.com> writes:
    [color=blue]
    > Defining a column UNIQUE is a last line of defense, and aborted actions
    > because of constraint violation should be the exception, not the normal mode
    > of operation.[/color]

    Well that's one approach. I don't agree. The database is a tool, unique key
    constraints are a tool, they're good at doing certain things, like ensuring
    atomic semantics for cases just like this. Why try to reinvent the wheel using
    inferior tools in the application? You're doomed to fail and to introduce race
    conditions.

    In fact in this situation I usually prefer to try the insert and handle
    exceptions over any of the other approaches. It's cleaner, clearer, faster in
    the normal case, and has the least likelihood of race conditions (none if the
    table never has deletes).
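    A minimal sketch of "try the INSERT and handle the exception", again with
    sqlite3 standing in for PostgreSQL (sqlite3 happens to let you catch a
    duplicate-key error and continue the transaction, which is exactly the
    capability being discussed) and invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

def upsert(conn, item_id, val):
    """Try the INSERT first; on a duplicate-key error fall back to UPDATE."""
    try:
        conn.execute("INSERT INTO items (id, val) VALUES (?, ?)", (item_id, val))
    except sqlite3.IntegrityError:  # unique-constraint violation
        conn.execute("UPDATE items SET val = ? WHERE id = ?", (val, item_id))
    conn.commit()

upsert(conn, 1, "first")   # row is new: the INSERT succeeds
upsert(conn, 1, "second")  # duplicate key: falls through to the UPDATE
```

    Note the caveat from the thread: this is only race-free if rows are never
    deleted concurrently between the failed INSERT and the UPDATE.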
    [color=blue]
    > Wherever one is using this "REPLACE INTO" language violation, the client
    > application or even something in front of it is generating IDs but it's not
    > sure if it is sending down a new or existing one. The real question is "why
    > is this piece of garbage unable to tell whether the ID is newly created or
    > has to exist already?"[/color]

    Well, because that's the database's job. If the application tried to do that
    it would have to solve all the same concurrency and atomicity issues that the
    database already solves. That's why I'm using a database in the first
    place.


    --
    greg


    ---------------------------(end of broadcast)---------------------------
    TIP 3: if posting/reading through Usenet, please send an appropriate
    subscribe-nomail command to majordomo@postgresql.org so that your
    message can get through to the mailing list cleanly


    • Csaba Nagy

      #17
      Re: Replaceing records

      [rant mode]
      I have to answer this: I'm not trying to use a non-standard feature, I'm
      trying to solve a problem: namely, to be able to try an insert and on
      failure continue the transaction. This is by no means a non-standard
      feature.
      AFAICT the standard says nothing about rolling back automatically a
      transaction on error, it just says that YOU should be able to roll it
      back or commit it, and then all or nothing of the changes should be
      executed.
      The application design can be "fixed", but that means ugly workarounds.
      In my case a simple fix would be to always insert all the possible
      records before any update would happen, but that would bloat the table
      10-fold - I think you agree this is unacceptable.
      Please understand me: I'm not after pissing off the postgres developers
      by telling them Postgres is not up to it; I'm trying to insist that nested
      transactions are a very important feature, which can solve lots of
      problems which apparently might have nothing to do with nested
      transactions.

      Cheers,
      Csaba.


      On Fri, 2003-09-05 at 04:38, Jan Wieck wrote:[color=blue]
      > Whatever you guys try or suggest, it's doomed to suffer.
      >
      > The whole problem stems from using a non-standard feature. And in my
      > opinion MySQL's "REPLACE INTO" is less a feature or extension to the
      > standard than another stupid and less thought-through addition of
      > apparently speed-gaining crap at the cost of proper design.
      >
      > One possible reason why this sort of "feature" was left out of the SQL
      > standard could be that the source of an ID, that is supposed to be
      > unique in the end, should by default ensure its uniqueness. Defining a
      > column UNIQUE is a last line of defense, and aborted actions because of
      > constraint violation should be the exception, not the normal mode of
      > operation. If it's the DB's job to ensure uniqueness, it has to generate
      > the ID and one can use a sequence. If it's the application's job to
      > generate it, the application should know if this is an INSERT or an UPDATE.
      >
      > Wherever one is using this "REPLACE INTO" language violation, the client
      > application or even something in front of it is generating IDs but it's
      > not sure if it is sending down a new or existing one. The real question
      > is "why is this piece of garbage unable to tell whether the ID is newly
      > created or has to exist already?"
      >
      > I don't think there should be a way to substitute this. Fix the
      > application design instead.
      >
      >
      > Jan[/color]



      ---------------------------(end of broadcast)---------------------------
      TIP 4: Don't 'kill -9' the postmaster


      • Peter Childs

        #18
        Re: Replacing records

        Fine, says he, seeing both sides and wanting to back both sides.

        REPLACE INTO throws away data without the user knowing that they
        have ditched the data. This means it has side effects. Standard
        Programming Theory says that functions should not have unintended side
        effects. REPLACE INTO is in fact a

        DELETE followed by an INSERT

        It's also unclear what replace should do with missing fields
        1. Fill them in with the defaults.
        2. Leave them alone.

        If 1, it's very dangerous, and if 2, you should know what you are
        changing anyway; at the very least it should return "UPDATE" or "INSERT"
        depending on what "REPLACE" actually did!

        On the other hand nested transactions are a good idea. There is one problem
        however: when do you impose referential integrity for deferred checks? At
        the last commit? Or do you need a check-references command?

        It's not the point that an UPDATE on most databases implies COPY,
        CHANGE COPY, DELETE steps so you can roll back if necessary.

        Replace also needs to know the table structure to work; UPDATE,
        INSERT and DELETE don't, they only need to check the constraints.

        As I'm sure I've said before, SQL has huge holes and inconsistencies
        and needs a complete re-write. It's like VHS video: not the best, just
        what's been sold to everyone.

        Peter Childs


        On 5 Sep 2003, Csaba Nagy wrote:
        [color=blue]
        > [rant mode]
        > I have to answer this: I'm not trying to use a non-standard feature, I'm
        > trying to solve a problem: namely, to be able to try an insert and on
        > failure continue the transaction. This is by no means a non-standard
        > feature.
        > AFAICT the standard says nothing about rolling back automatically a
        > transaction on error, it just says that YOU should be able to roll it
        > back or commit it, and then all or nothing of the changes should be
        > executed.
        > The application design can be "fixed", but that means ugly workarounds.
        > In my case a simple fix would be to always insert all the possible
        > records before any update would happen, but that would bloat the table
        > 10-fold - I think you agree this is unacceptable.
        > Please understand me: I'm not after pissing off the postgres developers
        > by telling them Postgres is not up to it; I'm trying to insist that nested
        > transactions are a very important feature, which can solve lots of
        > problems which apparently might have nothing to do with nested
        > transactions.
        >
        > Cheers,
        > Csaba.
        >
        >
        > On Fri, 2003-09-05 at 04:38, Jan Wieck wrote:[color=green]
        > > Whatever you guys try or suggest, it's doomed to suffer.
        > >
        > > The whole problem stems from using a non-standard feature. And in my
        > > opinion MySQL's "REPLACE INTO" is less a feature or extension to the
        > > standard than another stupid and less thought-through addition of
        > > apparently speed-gaining crap at the cost of proper design.
        > >
        > > One possible reason why this sort of "feature" was left out of the SQL
        > > standard could be that the source of an ID, that is supposed to be
        > > unique in the end, should by default ensure its uniqueness. Defining a
        > > column UNIQUE is a last line of defense, and aborted actions because of
        > > constraint violation should be the exception, not the normal mode of
        > > operation. If it's the DB's job to ensure uniqueness, it has to generate
        > > the ID and one can use a sequence. If it's the application's job to
        > > generate it, the application should know if this is an INSERT or an UPDATE.
        > >
        > > Wherever one is using this "REPLACE INTO" language violation, the client
        > > application or even something in front of it is generating IDs but it's
        > > not sure if it is sending down a new or existing one. The real question
        > > is "why is this piece of garbage unable to tell whether the ID is newly
        > > created or has to exist already?"
        > >
        > > I don't think there should be a way to substitute this. Fix the
        > > application design instead.
        > >
        > >
        > > Jan[/color]
        >
        >
        >
        > ---------------------------(end of broadcast)---------------------------
        > TIP 4: Don't 'kill -9' the postmaster
        >[/color]


        ---------------------------(end of broadcast)---------------------------
        TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org


        • Csaba Nagy

          #19
          Re: Replacing records

          Thinking about it, there's probably no easy way to avoid race conditions
          (in a true transactional DB at least) when inserting into a table with
          unique constraints. The REPLACE syntax will definitely not do it,
          because I can't imagine what it should do when 2 threads try to REPLACE
          the same key in concurrent transactions. Both will see the key as
          missing, and try to insert it, so back we are at the same problem INSERT
          has.

          Cheers,
          Csaba.

          On Fri, 2003-09-05 at 12:06, Peter Childs wrote:[color=blue]
          > REPLACE INTO throws away data without the user knowing that they
          > have ditched the data. This means it has side effects. Standard
          > Programming Theory says that functions should not have unintended side
          > effects. REPLACE INTO is in fact a
          >
          > DELETE followed by an INSERT
          >
          > It's also unclear what replace should do with missing fields
          > 1. Fill them in with the defaults.
          > 2. Leave them alone.
          >
          > If 1, it's very dangerous, and if 2, you should know what you are
          > changing anyway; at the very least it should return "UPDATE" or "INSERT"
          > depending on what "REPLACE" actually did!
          >
          > On the other hand nested transactions are a good idea. There is one problem
          > however: when do you impose referential integrity for deferred checks? At
          > the last commit? Or do you need a check-references command?
          >
          > It's not the point that an UPDATE on most databases implies COPY,
          > CHANGE COPY, DELETE steps so you can roll back if necessary.
          >
          > Replace also needs to know the table structure to work; UPDATE,
          > INSERT and DELETE don't, they only need to check the constraints.
          >
          > As I'm sure I've said before, SQL has huge holes and inconsistencies
          > and needs a complete re-write. It's like VHS video: not the best, just
          > what's been sold to everyone.
          >
          > Peter Childs[/color]



          ---------------------------(end of broadcast)---------------------------
          TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org


          • Jan Wieck

            #20
            Re: Replacing records

            It was not meant against anyone in person and I agree that nested
            transactions and/or catchable exceptions and continuing afterwards is
            useful and missing in PostgreSQL. What Stephan and Richard were
            actually discussing was more like emulating the REPLACE INTO, and I was
            responding to that.

            However, even with nested transactions and exceptions and all that, your
            problem will not be cleanly solvable. You basically have 2 choices,
            trying the INSERT first and if that fails with a duplicate key then do
            the UPDATE, or try the UPDATE first and if no rows got hit do an INSERT.
            Now if 2 concurrent transactions do try the UPDATE they can both not
            find the row and do INSERT - one has a dupkey error. But if you try to
            INSERT and get a duplicate key, in the time between you get the error
            and issue the UPDATE someone else can issue a DELETE - the row is gone
            and your UPDATE will fail.

            So you have to go into a loop and try INSERT-or-UPDATE-or-INSERT... until
            you either get bored or succeed ... that's not exactly what I call a
            solution.
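            The loop described above might look like this; a hedged sketch with
            sqlite3 from the Python standard library standing in for PostgreSQL,
            invented table names, and a retry bound added because the loop
            otherwise has no guaranteed exit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

def replace_into(conn, item_id, val, max_tries=10):
    """INSERT-or-UPDATE-or-INSERT... until one of them sticks."""
    for _ in range(max_tries):
        try:
            conn.execute("INSERT INTO items (id, val) VALUES (?, ?)",
                         (item_id, val))
            conn.commit()
            return "INSERT"
        except sqlite3.IntegrityError:
            pass  # duplicate key: a row with this id existed a moment ago
        cur = conn.execute("UPDATE items SET val = ? WHERE id = ?",
                           (val, item_id))
        if cur.rowcount == 1:
            conn.commit()
            return "UPDATE"
        # rowcount == 0: the row vanished (a concurrent DELETE) between the
        # failed INSERT and this UPDATE -- go around and try the INSERT again.
    raise RuntimeError("gave up after %d attempts" % max_tries)
```

            Returning "INSERT" or "UPDATE" also answers Peter's point that the
            caller ought to know which of the two actually happened.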


            Jan

            Csaba Nagy wrote:[color=blue]
            > [rant mode]
            > I have to answer this: I'm not trying to use a non-standard feature, I'm
            > trying to solve a problem: namely, to be able to try an insert and on
            > failure continue the transaction. This is by no means a non-standard
            > feature.
            > AFAICT the standard says nothing about rolling back automatically a
            > transaction on error, it just says that YOU should be able to roll it
            > back or commit it, and then all or nothing of the changes should be
            > executed.
            > The application design can be "fixed", but that means ugly workarounds.
            > In my case a simple fix would be to always insert all the possible
            > records before any update would happen, but that would bloat the table
            > 10-fold - I think you agree this is unacceptable.
            > Please understand me: I'm not after pissing off the postgres developers
            > by telling them Postgres is not up to it; I'm trying to insist that nested
            > transactions are a very important feature, which can solve lots of
            > problems which apparently might have nothing to do with nested
            > transactions.
            >
            > Cheers,
            > Csaba.
            >
            >
            > On Fri, 2003-09-05 at 04:38, Jan Wieck wrote:[color=green]
            >> Whatever you guys try or suggest, it's doomed to suffer.
            >>
            >> The whole problem stems from using a non-standard feature. And in my
            >> opinion MySQL's "REPLACE INTO" is less a feature or extension to the
            >> standard than another stupid and less thought-through addition of
            >> apparently speed-gaining crap at the cost of proper design.
            >>
            >> One possible reason why this sort of "feature" was left out of the SQL
            >> standard could be that the source of an ID, that is supposed to be
            >> unique in the end, should by default ensure its uniqueness. Defining a
            >> column UNIQUE is a last line of defense, and aborted actions because of
            >> constraint violation should be the exception, not the normal mode of
            >> operation. If it's the DB's job to ensure uniqueness, it has to generate
            >> the ID and one can use a sequence. If it's the application's job to
            >> generate it, the application should know if this is an INSERT or an UPDATE.
            >>
            >> Wherever one is using this "REPLACE INTO" language violation, the client
            >> application or even something in front of it is generating IDs but it's
            >> not sure if it is sending down a new or existing one. The real question
            >> is "why is this piece of garbage unable to tell whether the ID is newly
            >> created or has to exist already?"
            >>
            >> I don't think there should be a way to substitute this. Fix the
            >> application design instead.
            >>
            >>
            >> Jan[/color][/color]

            --
            #==========================================================================#
            # It's easier to get forgiveness for being wrong than for being right.    #
            # Let's break this rule - forgive me.                                      #
            #===================================================== JanWieck@Yahoo.com #


            ---------------------------(end of broadcast)---------------------------
            TIP 2: you can get off all lists at once with the unregister command
            (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)


            • Csaba Nagy

              #21
              Re: Replacing records

              > You're completely right on that not all possible problems are solved by[color=blue]
              > this, but different solutions are better or worse based also on the odds
              > for the problem to actually occur. My code can deal with broken
              > transactions, it's just unacceptable if they are broken too often - that
              > slows down the system. You must admit that the odds of the
              > insert-update-delete happening at the same time are much lower than of
              > just 2 inserts happening at the same time, whatever the application usage
              > pattern would be. In particular, it's fairly easy to make sure there's
              > no delete when updates happen: select the row for update. Only the[/color]
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^
              Of course that's stupid. When you do an update it selects the row for
              update... and that will not help in this case.
              But the update will not fail. It will just have nothing to update,
              which usually is just alright if the row was deleted, meaning that its
              lifetime ended.
              BTW, in my particular problem I can make sure there will be no delete
              until all insert/updates are finished.
              [color=blue]
              > insert is the problem cause you don't have the row to lock beforehand.
              >
              > Cheers,
              > Csaba.
              >[/color]



              ---------------------------(end of broadcast)---------------------------
              TIP 7: don't forget to increase your free space map settings


              • Csaba Nagy

                #22
                Re: Replacing records

                On Fri, 2003-09-05 at 15:29, Jan Wieck wrote:[color=blue]
                > However, even with nested transactions and exceptions and all that, your
                > problem will not be cleanly solvable. You basically have 2 choices,
                > trying the INSERT first and if that fails with a duplicate key then do
                > the UPDATE, or try the UPDATE first and if no rows got hit do an INSERT.
                > Now if 2 concurrent transactions do try the UPDATE they can both not
                > find the row and do INSERT - one has a dupkey error. But if you try to
                > INSERT and get a duplicate key, in the time between you get the error
                > and issue the UPDATE someone else can issue a DELETE - the row is gone
                > and your UPDATE will fail.
                >
                > So you have to go into a loop and try INSERT-or-UPDATE-or-INSERT... until
                > you either get bored or succeed ... that's not exactly what I call a
                > solution.[/color]

                You're completely right on that not all possible problems are solved by
                this, but different solutions are better or worse based also on the odds
                for the problem to actually occur. My code can deal with broken
                transactions, it's just unacceptable if they are broken too often - that
                slows down the system. You must admit that the odds of the
                insert-update-delete happening at the same time are much lower than of
                just 2 inserts happening at the same time, whatever the application usage
                pattern would be. In particular, it's fairly easy to make sure there's
                no delete when updates happen: select the row for update. Only the
                insert is the problem because you don't have the row to lock beforehand.

                Cheers,
                Csaba.



                ---------------------------(end of broadcast)---------------------------
                TIP 3: if posting/reading through Usenet, please send an appropriate
                subscribe-nomail command to majordomo@postgresql.org so that your
                message can get through to the mailing list cleanly


                • Ron Johnson

                  #23
                  Re: Replacing records

                  On Fri, 2003-09-05 at 08:29, Jan Wieck wrote:[color=blue]
                  > It was not meant against anyone in person and I agree that nested
                  > transactions and/or catchable exceptions and continuing afterwards is
                  > useful and missing in PostgreSQL. What Stephan and Richard were
                  > actually discussing was more like emulating the REPLACE INTO, and I was
                  > responding to that.
                  >
                  > However, even with nested transactions and exceptions and all that, your
                  > problem will not be cleanly solvable. You basically have 2 choices,
                  > trying the INSERT first and if that fails with a duplicate key then do
                  > the UPDATE, or try the UPDATE first and if no rows got hit do an INSERT.
                  > Now if 2 concurrent transactions do try the UPDATE they can both not
                  > find the row and do INSERT - one has a dupkey error. But if you try to
                  > INSERT and get a duplicate key, in the time between you get the error
                  > and issue the UPDATE someone else can issue a DELETE - the row is gone
                  > and your UPDATE will fail.[/color]

                  SERIALIZABLE transactions will solve this.

                  --
                  -----------------------------------------------------------------
                  Ron Johnson, Jr. ron.l.johnson@cox.net
                  Jefferson, LA USA

                  Thanks to the good people in Microsoft, a great deal of the data
                  that flows is dependent on one company. That is not a healthy
                  ecosystem. The issue is that creativity gets filtered through
                  the business plan of one company.
                  Mitchell Baker, "Chief Lizard Wrangler" at Mozilla


                  ---------------------------(end of broadcast)---------------------------
                  TIP 4: Don't 'kill -9' the postmaster


                  • Tom Lane

                    #24
                    Re: Replacing records

                    Csaba Nagy <nagy@ecircle-ag.com> writes:[color=blue]
                    > Thinking about it, there's probably no easy way to avoid race conditions
                    > (in a true transactional DB at least) when inserting into a table with
                    > unique constraints. The REPLACE syntax will definitely not do it,
                    > because I can't imagine what it should do when 2 threads try to REPLACE
                    > the same key in concurrent transactions. Both will see the key as
                    > missing, and try to insert it, so back we are at the same problem INSERT
                    > has.[/color]

                    Assuming that you've got a unique constraint defined, one thread will
                    succeed in doing the INSERT, and the other will fail with a duplicate
                    key error --- whereupon it should loop back and try the REPLACE part
                    again. So what this all comes down to is having control over recovery
                    from a dup-key error. You have to be able to not have that abort your
                    whole transaction.

                    regards, tom lane

                    ---------------------------(end of broadcast)---------------------------
                    TIP 5: Have you checked our extensive FAQ?




                    • Jan Wieck

                      #25
                      Re: Replacing records



                      Ron Johnson wrote:
                      [color=blue]
                      > On Fri, 2003-09-05 at 08:29, Jan Wieck wrote:[color=green]
                      >> It was not meant against anyone in person and I agree that nested
                      >> transactions and/or catchable exceptions and continuing afterwards is
                      >> useful and missing in PostgreSQL. What Stephan and Richard were
                      >> actually discussing was more like emulating the REPLACE INTO, and I was
                      >> responding to that.
                      >>
                      >> However, even with nested transactions and exceptions and all that, your
                      >> problem will not be cleanly solvable. You basically have 2 choices,
                      >> trying the INSERT first and if that fails with a duplicate key then do
                      >> the UPDATE, or try the UPDATE first and if no rows got hit do an INSERT.
                      >> Now if 2 concurrent transactions do try the UPDATE they can both not
                      >> find the row and do INSERT - one has a dupkey error. But if you try to
                      >> INSERT and get a duplicate key, in the time between you get the error
                      >> and issue the UPDATE someone else can issue a DELETE - the row is gone
                      >> and your UPDATE will fail.[/color]
                      >
                      > SERIALIZABLE transactions will solve this.[/color]

                      Sure they will.

                      Care to elaborate a bit about the side effects of SERIALIZABLE? I mean
                      semantics AND performance wise ... people tend to use suggestions like
                      this without thinking (about the consequences).


                      Jan :-T

                      --
                      #==========================================================================#
                      # It's easier to get forgiveness for being wrong than for being right.    #
                      # Let's break this rule - forgive me.                                      #
                      #===================================================== JanWieck@Yahoo.com #


                      ---------------------------(end of broadcast)---------------------------
                      TIP 3: if posting/reading through Usenet, please send an appropriate
                      subscribe-nomail command to majordomo@postgresql.org so that your
                      message can get through to the mailing list cleanly



                        • Ron Johnson

                          #27
                          Re: Replacing records

                          On Wed, 2003-09-10 at 00:31, Jan Wieck wrote:
                          > Ron Johnson wrote:
                          >
                          > > On Fri, 2003-09-05 at 08:29, Jan Wieck wrote:
                          > >> It was not meant against anyone in person and I agree that nested
                          > >> transactions and/or catchable exceptions and continuing afterwards is
                          > >> useful and missing in PostgreSQL. What Stephan and Richard were
                          > >> actually discussing was more like emulating the REPLACE INTO, and I was
                          > >> responding to that.
                          > >>
                          > >> However, even with nested transactions and exceptions and all that, your
                          > >> problem will not be cleanly solvable. You basically have 2 choices:
                          > >> try the INSERT first and if that fails with a duplicate key then do
                          > >> the UPDATE, or try the UPDATE first and if no rows got hit do an INSERT.
                          > >> Now if 2 concurrent transactions both try the UPDATE, they can both not
                          > >> find the row and do the INSERT - one gets a dupkey error. But if you try
                          > >> to INSERT and get a duplicate key, in the time between you get the error
                          > >> and issue the UPDATE someone else can issue a DELETE - the row is gone
                          > >> and your UPDATE will fail.
                          > >
                          > > SERIALIZABLE transactions will solve this.
                          >
                          > Sure will they.
                          >
                          > Care to elaborate a bit about the side effects of SERIALIZABLE? I mean
                          > semantics AND performance-wise ... people tend to use suggestions like
                          > this without thinking (about the consequences).

                          Well, unless INSERT/UPDATE/DELETE transactions are very short, there
                          will definitely be a performance hit because of increased locking.

                          However, I prefer that consequence rather than the artifacts from
                          READ COMMITTED.
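[Editor's note: one consequence worth spelling out: under SERIALIZABLE the database resolves the conflict by aborting one of the colliding transactions, so the client still needs a retry loop. In PostgreSQL the abort surfaces as SQLSTATE 40001 ("serialization failure"). A minimal sketch of that loop follows; the `SerializationFailure` class and the `upsert_once` demo are stand-ins for whatever the real driver raises (e.g. psycopg2's error class of the same name).]

```python
class SerializationFailure(Exception):
    """Stand-in for the driver's SQLSTATE 40001 serialization-failure error."""

def run_serializable(txn_body, max_attempts=5):
    """Run txn_body(), retrying whenever it aborts with a serialization failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_body()
        except SerializationFailure:
            if attempt == max_attempts:
                raise  # too many collisions; give up and surface the error

# Demo: a transaction body that collides once, then succeeds on the retry.
calls = {"n": 0}
def upsert_once():
    calls["n"] += 1
    if calls["n"] == 1:
        raise SerializationFailure()  # simulated conflict with another session
    return "committed"

result = run_serializable(upsert_once)  # succeeds on the second attempt
```

This is the trade Ron is describing: SERIALIZABLE removes the race from the application logic, at the cost of aborted-and-retried transactions when sessions actually collide.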

                          --
                          -----------------------------------------------------------------
                          Ron Johnson, Jr. ron.l.johnson@cox.net
                          Jefferson, LA USA

                          LUKE: Is Perl better than Python?
                          YODA: No... no... no. Quicker, easier, more seductive.
                          LUKE: But how will I know why Python is better than Perl?
                          YODA: You will know. When your code you try to read six months
                          from now.


                          ---------------------------(end of broadcast)---------------------------
                          TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
