Performance bottleneck in bulk updates/inserts

  • shreyask
    New Member
    • Sep 2008
    • 8

    Performance bottleneck in bulk updates/inserts

    I have been working on bulk updates and inserts through sqlapi++, using a bulk update extension library that was developed internally a long time ago. The database is Oracle. The bulk inserts/updates are done in batches of 50K records. I am facing a very peculiar performance issue with these bulk operations.

    The initial batch of bulk inserts (50K records) finishes within 3 seconds, while the next similar batch of bulk inserts or updates (50K records again) takes a whopping 373 seconds. Using 'top' on a RHEL-AS-4 server, I could see that it is the oracle process that takes up the whole 373 seconds, so sqlapi++ and the internally developed extension are not the culprits.

    The third batch of 50K records in the sequence takes a lot more time (913 seconds). The time goes on increasing exponentially, and there doesn't seem to be any pattern to it.

    Surprisingly, this is not consistent. On a good day, the second batch goes through in 3 seconds, with all records intact and no defects whatsoever in the data. In fact, all of the following batches finish in or around 3 seconds as well.

    Yet more surprisingly, if I truncate the table and restart the process, the performance problem reappears: the second batch again takes 370-380 seconds. And if I use a 'delete from' query instead of a 'truncate table' query to remove all the records from the table, there is no problem at all!

    So in short, I have come to the conclusion that the performance bottleneck occurs when the table has been truncated (or created brand new), and not when all records have been deleted using a 'delete from' query.

    Any guesses as to why this could be occurring? I confess I am not very good at databases, so any help would be very much appreciated.

    Thanks in advance.

    -
    Shreyas
  • debasisdas
    Recognized Expert
    • Dec 2006
    • 8119

    #2
    You need to check for any blocking locks in the database.
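
    Something along these lines should surface them (assuming you can query the DBA and V$ views; the v$lock self-join is just one common way to pair blockers with waiters):

        -- sessions currently waiting on a lock, and who holds it
        SELECT waiting_session, holding_session, lock_type, mode_held, mode_requested
        FROM   dba_waiters;

        -- blocker/waiter pairs straight from v$lock
        SELECT l1.sid AS blocker_sid, l2.sid AS waiter_sid
        FROM   v$lock l1, v$lock l2
        WHERE  l1.block = 1
        AND    l2.request > 0
        AND    l1.id1 = l2.id1
        AND    l1.id2 = l2.id2;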


    • shreyask
      New Member
      • Sep 2008
      • 8

      #3
      Originally posted by debasisdas
      You need to check for any blocking locks in the database.
      I tried 'select * from dba_waiters' and 'select * from dba_blockers', but both views returned no rows. So I suppose there are no blocking locks; correct me if I am wrong.
      I also rechecked for commits at the end of the sessions, and found they were in place. In fact, this bottleneck occurs only when the table is truncated; that much is certain.

      What I don't understand is what Oracle could be doing differently between truncating the table and deleting its records with a 'delete from' query.


      • Dave44
        New Member
        • Feb 2007
        • 153

        #4
        Originally posted by shreyask
        I tried 'select * from dba_waiters' and 'select * from dba_blockers', but both views returned no rows. So I suppose there are no blocking locks; correct me if I am wrong.
        I also rechecked for commits at the end of the sessions, and found they were in place. In fact, this bottleneck occurs only when the table is truncated; that much is certain.

        What I don't understand is what Oracle could be doing differently between truncating the table and deleting its records with a 'delete from' query.
        When you truncate a table in Oracle, it deallocates all the space allocated to the table and puts it back on the free list, so the subsequent inserts have to go and request new space before they can insert. While I have seen this process slow things down somewhat, I wouldn't think that requesting enough space for 50K rows would take that long. What else is on the same I/O path as the datafiles you are using?
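
        If you want the extents kept around instead of deallocated, truncate takes a REUSE STORAGE clause; a minimal sketch (the table name is just a placeholder):

            -- default (DROP STORAGE): deallocates the table's extents
            TRUNCATE TABLE bulk_target;

            -- keeps the allocated extents, so the next bulk insert
            -- does not have to request fresh space first
            TRUNCATE TABLE bulk_target REUSE STORAGE;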

        Also, using top to see that oracle is running a process on the server doesn't really tell you a lot, because if you installed the database under the Linux user oracle then all the various database processes will show up as oracle. You should (or should get a DBA to) run a trace of your session so you can see what is taking all the time. AWR reports can also be a good indicator of where all your time is being taken up.
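
        For the trace, something like the following in the session that runs the inserts (or have a DBA enable it for your sid/serial# via DBMS_MONITOR) should produce a trace file you can feed through tkprof; the event level and the tkprof sort option are just one common choice:

            -- 10046 trace at level 12 records wait events and bind values
            ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

            -- ... run one of the slow 50K batches here ...

            ALTER SESSION SET EVENTS '10046 trace name context off';

        The trace file ends up in user_dump_dest; tkprof <tracefile> out.txt sort=exeela sorts the statements by elapsed execution time.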


        • shreyask
          New Member
          • Sep 2008
          • 8

          #5
          Yes, I had tried truncate with space reuse.

          Anyway, the problem has been resolved. If you are interested, you can have a look at this thread on OTN:
          Performance bottleneck in bulk updates-inserts
