Log write batch performance problem DB2 V8 for AIX

  • Mats Mohlin

    Log write batch performance problem DB2 V8 for AIX

    We are running a critical batch job in DB2 V8 FP4 for AIX on a p630
    with AIX 5.2.
    The job updates only 6 of the 100 columns in a 600-byte-wide row.
    The job is very I/O bound because of log writes. We seem to be logging
    full rows, 600 bytes per updated row.
    I thought DB2 would be clever enough to log only the modified columns?
    We also seem to write to the log in a lot of small writes; filemon
    -v -O lf output indicates that we write just 18 KB per I/O.
    The batch job updates 290 000 rows (read/update) and does just 58
    commits.
    We can reach a rate of only 1000 rows/sec, and we need to increase this.

    I have tried to increase the log buffer size; these are our values:
    DBHEAP 10000, LOGRETAIN OFF, LOGBUFSZ 4000,
    LOGPRIMARY 14, LOGSECOND 20, LOGFILSIZ 8000
    How can we get DB2 to log less and write more on each log write?
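
    A minimal sketch of how the log-related settings can be checked and
    adjusted from the CLP; the database name MYDB is a placeholder, and the
    values shown are examples to experiment with, not recommendations:

        -- check the current log-related settings
        GET DB CFG FOR MYDB

        -- the log buffer is allocated out of the database heap, so leave
        -- DBHEAP headroom if LOGBUFSZ is raised; the change takes effect
        -- the next time the database is activated
        UPDATE DB CFG FOR MYDB USING LOGBUFSZ 4096
        UPDATE DB CFG FOR MYDB USING DBHEAP 12000

        -- compare log activity in a database snapshot before and after a run
        GET SNAPSHOT FOR DATABASE ON MYDB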

    Mats Mohlin IBM Sweden

  • Anton Versteeg

    #2
    Re: Log write batch performance problem DB2 V8 for AIX

    Mats,
    Just wondering:
    Is the sequence of your updates the same as the physical sequence of the
    rows in the database?
    How exactly are you updating? Via a cursor or by key?
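
    For reference, in embedded-SQL terms the two variants look roughly like
    this; the table, column, and host-variable names are invented for
    illustration:

        -- positioned update through an updatable cursor (note that adding
        -- ORDER BY to this cursor would make it read-only)
        DECLARE c1 CURSOR FOR
          SELECT col_a, col_b
            FROM my_table
             FOR UPDATE OF col_a, col_b;

        UPDATE my_table
           SET col_a = :new_a, col_b = :new_b
         WHERE CURRENT OF c1;

        -- searched update driven by the key
        UPDATE my_table
           SET col_a = :new_a, col_b = :new_b
         WHERE pk_col = :key;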

    --
    Anton Versteeg
    IBM Certified DB2 Specialist
    IBM Netherlands



    • Mark A

      #3
      Re: Log write batch performance problem DB2 V8 for AIX

      Doesn't DB2 log all data from the changed column to the end of the row?
      Or maybe that has changed? I know that is how DB2 for OS/390 worked at
      one time.

      I would do commits more often, perhaps once every 100-500 updates.
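
      One way to sketch that, assuming the batch can be driven by a numeric
      key and using made-up table and column names, is an SQL procedure that
      commits after each slice of roughly 500 rows (created through the CLP
      with a non-default statement terminator, e.g. db2 -td@ -f script.sql):

          -- hypothetical sketch only: commit after each key slice instead
          -- of holding one long unit of work
          CREATE PROCEDURE batch_update_sketch ()
          LANGUAGE SQL
          BEGIN
            DECLARE v_lo  INTEGER DEFAULT 0;
            DECLARE v_max INTEGER;
            SELECT MAX(batch_key) INTO v_max FROM my_table;
            WHILE v_lo <= v_max DO
              UPDATE my_table
                 SET col_a = col_a + 1   -- stand-in for the real 6-column update
               WHERE batch_key BETWEEN v_lo AND v_lo + 499;
              COMMIT;                    -- release locks and log space per slice
              SET v_lo = v_lo + 500;
            END WHILE;
          END@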

      As previously mentioned, it is important to process the updates in the same
      order as the physical sequence of the table, if possible. This is defined by
      the clustering index, or the index used in a reorg if no clustering index is
      defined.
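
      A hedged illustration of tying the two together, again with invented
      names: define (or verify) a clustering index on the key the batch
      walks, reorg on it, and drive the batch in that key order:

          CREATE INDEX my_clust_ix ON my_table (batch_key) CLUSTER;
          REORG TABLE my_table INDEX my_clust_ix;

          -- drive the read/update loop in the same order so the updates
          -- touch data pages roughly sequentially
          SELECT batch_key, col_a, col_b
            FROM my_table
           ORDER BY batch_key;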

      Also, if VARCHAR columns are updated (and made larger), make sure you
      have enough percent free defined for the table to handle that; otherwise
      the row may need to be relocated (very expensive).

      Percent free on the indexes is also important if the indexed columns are
      changed by the update statement.
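
      For example (names invented, and the free-space numbers are only
      placeholders): free space on data pages is set with ALTER TABLE ...
      PCTFREE and honoured at the next LOAD or REORG, while free space on
      index pages is set when the index is created:

          ALTER TABLE my_table PCTFREE 15;
          REORG TABLE my_table INDEX my_clust_ix;

          CREATE INDEX my_ix ON my_table (col_f) PCTFREE 15;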


