GLOBAL TEMPORARY table - serious join performance problem

  • Jim Garrison

    GLOBAL TEMPORARY table - serious join performance problem

    Scenario:

    1) Create a GLOBAL TEMPORARY table and populate it with
    one (1) row.

    2) Join that table to another with about 1 million rows.
    The join condition selects a few hundred rows.

    Performance: 4 seconds, the system is doing a full-table
    scan of the second table, and the Explain Plan output
    indicates that Oracle thinks the first table has 4000
    rows.

    Now replace the GLOBAL TEMPORARY table with a real table
    and repeat exactly the same query. This runs in 94 milliseconds
    and the Explain Plan shows the correct number of rows for
    the driving table and an index scan on the second table, as
    I would have expected.
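A minimal sketch of the setup described above (table and column names are illustrative, not the poster's actual schema):

```sql
-- Session-private temporary table; each session sees only its own rows
CREATE GLOBAL TEMPORARY TABLE gtt (key_col NUMBER)
  ON COMMIT PRESERVE ROWS;

INSERT INTO gtt VALUES (42);

-- Join against the large table; the optimizer has no statistics on gtt,
-- so it guesses the cardinality and may choose a full scan of big_table
SELECT b.*
FROM   gtt g, big_table b
WHERE  b.key_col = g.key_col;
```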

    Can anyone suggest a solution that will make the GLOBAL TEMPORARY
    implementation as fast as the real table version?

    BTW, why are there two sets of parallel groups under both
    comp.database.oracle and comp.databases.oracle?

    Jim Garrison
    jhg@athensgroup.com
  • Hans Forbrich

    #2
    Re: GLOBAL TEMPORARY table - serious join performance problem

    Jim Garrison wrote:
    > BTW, why are there two sets of parallel groups under both
    > comp.database.oracle and comp.databases.oracle?

    comp.database.oracle was created a couple of years ago by accident. Some
    ISPs allowed users to create newsgroups on demand. Some people still insist
    on using that one.

    comp.databases.oracle itself has been voted out, in favor of the
    comp.databases.oracle.* hierarchy, so those of us answering could get a bit
    of discrimination on the questions.



    • Jim Garrison

      #3
      Re: GLOBAL TEMPORARY table - serious join performance problem

      Jim Garrison wrote:
      > Scenario:
      >
      > 1) Create a GLOBAL TEMPORARY table and populate it with
      > one (1) row.
      >
      > 2) Join that table to another with about 1 million rows.
      > The join condition selects a few hundred rows.
      [snip]

      I found references to one solution, which is to set the
      table statistics (numrows, specifically) manually using
      DBMS_STATS.SET_TABLE_STATS. While this works, it appears
      that the table statistics are shared among all sessions.
      That is, even though every session gets its own copy of
      the data, there's only one copy of the stats. Two
      sessions with greatly differing rowcounts will step
      on each other's stats. Oh well.... I guess I'm
      going to have to go back to real tables.
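      The workaround mentioned above, sketched (the table name is
      illustrative; and as noted, in 9i the statistics set this way are
      shared rather than per-session):

      ```sql
      -- Tell the optimizer the GTT holds roughly one row in one block
      BEGIN
        DBMS_STATS.SET_TABLE_STATS(
          ownname => USER,
          tabname => 'GTT',
          numrows => 1,
          numblks => 1);
      END;
      /
      ```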

      GTTs don't seem to be fully baked yet, at least in 9i.
      Can anyone confirm that GTTs work better in 10g?

      Jim Garrison
      jhg@athensgroup.com


      • Jonathan Lewis

        #4
        Re: GLOBAL TEMPORARY table - serious join performance problem


        Read about dynamic sampling.

        Use a level where all tables without
        statistics are sampled at run time, and
        your problem will go away.
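        A sketch of the two ways to apply this (the table alias is
        illustrative; level 2 samples all tables that have no statistics):

        ```sql
        -- Session-wide: sample every unanalyzed table at parse time
        ALTER SESSION SET optimizer_dynamic_sampling = 2;

        -- Or per-statement, targeting just the temporary table:
        SELECT /*+ dynamic_sampling(g 2) */ b.*
        FROM   gtt g, big_table b
        WHERE  b.key_col = g.key_col;
        ```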


        --
        Regards

        Jonathan Lewis


        The Co-operative Oracle Users' FAQ


        Optimising Oracle Seminar - schedule updated Sept 19th





        "Jim Garrison" <jhg@athensgroup.com> wrote in message
        news:GfednS5GOq_UaszcRVn-iA@giganews.com...
        > Scenario:
        >
        > 1) Create a GLOBAL TEMPORARY table and populate it with
        > one (1) row.
        >
        > 2) Join that table to another with about 1 million rows.
        > The join condition selects a few hundred rows.
        [snip]


        • Jim Garrison

          #5
          Re: GLOBAL TEMPORARY table - serious join performance problem

          Jonathan Lewis wrote:
          > Read about dynamic sampling.
          >
          > Use a level where all tables without
          > statistics are sampled at run time, and
          > your problem will go away.
          That does the trick.

          I added /*+ dynamic_sampling(gtt 1) */ and the query now runs
          FASTER than the real-table version (about 15% faster), and the
          execution plan is logical once again.

          One note that should be in the docs is that if the
          table has an alias in the statement then only the
          alias name can be used in the hint. The full table
          name is ignored in that case.
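          A sketch of the alias point (names are illustrative): if the
          statement aliases the table, the hint must reference the alias,
          not the underlying table name.

          ```sql
          -- Correct: hint uses the alias "g"
          SELECT /*+ dynamic_sampling(g 1) */ b.*
          FROM   gtt g, big_table b
          WHERE  b.key_col = g.key_col;

          -- Silently ignored: hint uses the table name while an alias exists
          SELECT /*+ dynamic_sampling(gtt 1) */ b.*
          FROM   gtt g, big_table b
          WHERE  b.key_col = g.key_col;
          ```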

          Thanks for your assistance.


          • Jeff White

            #6
            Re: GLOBAL TEMPORARY table - serious join performance problem

            "Jonathan Lewis" <jonathan@jlcomp.demon.co.uk> wrote in message
            news:<cit6l4$8le$1@titan.btinternet.com>...
            > Read about dynamic sampling.
            >
            > Use a level where all tables without
            > statistics are sampled at run time, and
            > your problem will go away.
            [snip]

            "Jim Garrison" <jhg@athensgroup.com> wrote in message
            news:GfednS5GOq_UaszcRVn-iA@giganews.com...
            > Scenario:
            >
            > 1) Create a GLOBAL TEMPORARY table and populate it with
            > one (1) row.
            >
            > 2) Join that table to another with about 1 million rows.
            > The join condition selects a few hundred rows.
            [snip]
            I may be way off base here...

            Did you collect stats on the 'real table'? If so, then the CBO knows
            there is only 1 row which would give you the 'ideal plan' using the
            index scan on table 2.

            With the temporary table, assuming you have stats on the second
            table but not on the temporary table, Oracle is still using the
            CBO. The optimizer has to guess how many rows are in the
            temporary table; the only information it has to go on is the
            table definition (row size) and your block size, so it may
            assume the worst case and come up with some large number of
            possible rows.


            SQL> create global temporary table foobar(col1 number)
              2  on commit preserve rows;
            SQL> insert into foobar values(1);
            SQL> commit;

            SQL> create table t1(col1 number);
            SQL> insert into t1 select object_id from user_objects;
            3 rows created.
            SQL> set autotrace on
            SQL> select * from t1
              2  , foobar
              3  where foobar.col1 = t1.col1
              4  /

            no rows selected


            Execution Plan
            ----------------------------------------------------------
               0      SELECT STATEMENT Optimizer=CHOOSE (Cost=14 Card=8168 Bytes=147024)
               1    0   HASH JOIN (Cost=14 Card=8168 Bytes=147024)
               2    1     TABLE ACCESS (FULL) OF 'T1' (Cost=2 Card=3 Bytes=15)
               3    1     TABLE ACCESS (FULL) OF 'FOOBAR' (Cost=11 Card=8168 Bytes=106184)

            Note the cardinality for temp table foobar. My block size is 8k.

            Jeff W.
            enrikonewsgroups@earthlink.net

