Large Table Index Rebuild Problem

  • fahimghauri
    New Member
    • Mar 2008
    • 8

    Large Table Index Rebuild Problem

    Hi all,

    I am running DB2 v8.1 FixPak 15 on Linux. I have a table with over 6 million rows. When I query that table it takes too much time, even if I fetch only the first 10 rows. It also gives the error "Log File has reached its saturation point" and message ADM5543E about an index rebuild of the indexes on the table.

    Sometimes it crashes the database manager.

    Please help: how can I manage that table efficiently?
    And why does the database manager crash?

    Sincerely,
    Fahim
  • sakumar9
    Recognized Expert New Member
    • Jan 2008
    • 127

    #2
    Originally posted by fahimghauri
    Things that you can try (example commands for each point are sketched after this list):
    1. Use proper indexes, chosen from your search criteria. Use Visual Explain to check whether the indexes you have created are actually being used; it also shows the time consumed at each step of the access plan, which helps you decide on suitable candidates for indexes.
    2. Log file reaching its saturation point: check the type of logging you are using. You can use circular logging or infinite logging, whichever suits you; infinite logging will prevent log-full errors. I am not sure whether this can cause a database manager crash.
    3. Enable INTRA_PARALLEL so that index rebuilds are done with parallel processing.
    4. Tune the SORTHEAP database configuration parameter and the SHEAPTHRES database manager configuration parameter to control the space available for sorts. Since each processor performs a private sort, SHEAPTHRES should be at least SORTHEAP multiplied by the number of processors used.
    5. Ensure dirty index pages are cleaned out of the buffer pool as soon as possible by tuning the number of page cleaners.
    6. Your table may have gone through many updates, inserts, and deletes, leaving it fragmented. Reorganize the table to get rid of the fragmentation.
    7. Refer to this link: http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.admin.perf.doc/doc/c0008170.html
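    For point 1, here is a rough sketch of creating an index and checking the access plan from the command line. MYDB, MYSCHEMA.MYTABLE and the column CUST_ID are only placeholders; substitute your own names. Visual Explain is the GUI; db2exfmt gives you the same plan information as text, and it needs the explain tables to exist first (they can be created with the EXPLAIN.DDL script in sqllib/misc).

        db2 connect to MYDB

        # create an index on the column(s) used in your WHERE clause (names are examples)
        db2 "CREATE INDEX MYSCHEMA.IX_MYTABLE_CUSTID ON MYSCHEMA.MYTABLE (CUST_ID)"

        # collect fresh statistics so the optimizer can consider the new index
        db2 "RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND DETAILED INDEXES ALL"

        # capture and format the access plan for a sample query
        db2 "SET CURRENT EXPLAIN MODE EXPLAIN"
        db2 "SELECT * FROM MYSCHEMA.MYTABLE WHERE CUST_ID = 12345 FETCH FIRST 10 ROWS ONLY"
        db2 "SET CURRENT EXPLAIN MODE NO"
        db2exfmt -d MYDB -1 -o explain.txt    # look for IXSCAN instead of TBSCAN in the plan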
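    For point 2, a sketch of the logging-related parameters, assuming you either enlarge the log space or switch to infinite logging. MYDB, the sizes and the archive path are only examples, and check that your FixPak level supports LOGARCHMETH1 before using it.

        # check the current logging configuration
        db2 get db cfg for MYDB | grep -i log

        # option A: enlarge the log space (values are illustrative, not recommendations)
        db2 update db cfg for MYDB using LOGFILSIZ 10000 LOGPRIMARY 20 LOGSECOND 40

        # option B: infinite logging - needs archive logging plus LOGSECOND -1
        db2 update db cfg for MYDB using LOGARCHMETH1 DISK:/db2/archlogs
        db2 update db cfg for MYDB using LOGSECOND -1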
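    For point 3, enabling intra-partition parallelism is an instance-level change and needs an instance restart; DFT_DEGREE ANY is optional and only shown as an example default.

        # enable intra-partition parallelism for the instance
        db2 update dbm cfg using INTRA_PARALLEL YES
        db2stop
        db2start

        # optionally let the optimizer pick the degree of parallelism per query
        db2 update db cfg for MYDB using DFT_DEGREE ANY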
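    For point 4, a minimal sketch of the two sort parameters; the numbers are placeholders, so size them for your memory and CPU count, keeping SHEAPTHRES at least SORTHEAP times the number of processors.

        # per-sort memory, set at the database level (in 4 KB pages)
        db2 update db cfg for MYDB using SORTHEAP 2048

        # instance-wide private sort memory threshold (in 4 KB pages)
        db2 update dbm cfg using SHEAPTHRES 16384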
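    For point 5, the page cleaner settings live in the database configuration; the values below are only examples.

        # more page cleaners write dirty pages out of the buffer pool sooner
        db2 update db cfg for MYDB using NUM_IOCLEANERS 4

        # lower the changed-pages threshold so cleaners are triggered earlier (default is 60)
        db2 update db cfg for MYDB using CHNGPGS_THRESH 40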
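    For point 6, a sketch of checking for and removing fragmentation. Again, MYSCHEMA.MYTABLE is a placeholder, and run REORG in a maintenance window because it is heavy on a 6-million-row table.

        # check whether the table or its indexes need reorganizing
        db2 reorgchk current statistics on table MYSCHEMA.MYTABLE

        # reorganize the table data and its indexes, then refresh statistics
        db2 "REORG TABLE MYSCHEMA.MYTABLE"
        db2 "REORG INDEXES ALL FOR TABLE MYSCHEMA.MYTABLE"
        db2 "RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND DETAILED INDEXES ALL"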

    Let me know if your problem is solved now.

    Regards
    -- Sanjay


    • fahimghauri
      New Member
      • Mar 2008
      • 8

      #3
      Thanks Sanjay,

      I am really grateful to you for replying in such detail.

      I'll try these tips in the evening and will tell you the result.

      Thanks again

      Fahim
