How can we improve query execution on a bulky database?

  • coolbuddy
    New Member
    • Feb 2010
    • 5

    How can we improve query execution on a bulky database?

    Hi Friends,

    I am working with PostgreSQL 8.3. I have one table, tableA, which contains a number of attributes, and we have already put indexes on the Id and liveId columns. The table also contains an insertedDate column, and one more very important thing is that this table gets approximately 30 hits every second.

    When we fetch records from this table it takes too much time. Yesterday I put an index on the insertedDate column, and for the first 4-5 hours it worked very well, but as the table size increased it started taking too much time again, even more than before. Therefore I dropped that index, but I still want to increase the fetching speed.

    If I put an index on it again, which way should I do it? I am new to database handling, so please help me.

    So please give me some guidance. Thanks in advance.
    Waiting for your early response...
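
    For reference, a minimal sketch of the kind of index being discussed, assuming the tableA and insertedDate names from this post (the index name itself is illustrative, not from the thread):
    Code:
    -- A plain btree index on the timestamp column; PostgreSQL can use it for
    -- range filters such as inserteddate > '2010-02-20' AND inserteddate < '2010-02-21'.
    CREATE INDEX tablea_inserteddate_idx ON tableA (insertedDate);
    Note that every additional index also adds write overhead to the roughly 30 hits per second mentioned above.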
  • rski
    Recognized Expert Contributor
    • Dec 2006
    • 700

    #2
    What query do you use to fetch data from the table?

    Comment

    • RedSon
      Recognized Expert Expert
      • Jan 2007
      • 4980

      #3
      Does PostgreSQL have a query analysis tool?

      Comment

      • coolbuddy
        New Member
        • Feb 2010
        • 5

        #4
        Hi Rski,
        Thanks for the reply.

        I am using a subquery; the query is:

        Code:
        select * from tableA
        where liveid in (select id from tableB
                         where vehicleid in (select id from tableC
                                             where registrationNumber = 'MH4R2423'))
          and inserteddate > '2010-02-20'
          and inserteddate < '2010-02-21';

        When I added an index on inserteddate it worked quite well for the first 4-5 hours, but as the table size increased it took too much time again, so I dropped that index.

        Dear RedSon, I am new to Postgres, and I only know that if we want to check query performance we can run EXPLAIN ANALYZE on the query.

        I am not a database guy, I am a developer, so I don't know too much about this. Still, I am trying to fix this issue, so if you know anything please help me.
        I heartily appreciate your help.
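
        For reference, checking the query with EXPLAIN ANALYZE just means prefixing it to the query above (a hedged example using the names from this thread):
        Code:
        -- Plans and actually executes the query, reporting estimated vs. actual
        -- row counts and timings for every plan node.
        EXPLAIN ANALYZE
        select * from tableA
        where liveid in (select id from tableB
                         where vehicleid in (select id from tableC
                                             where registrationNumber = 'MH4R2423'))
          and inserteddate > '2010-02-20'
          and inserteddate < '2010-02-21';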

        Comment

        • rski
          Recognized Expert Contributor
          • Dec 2006
          • 700

          #5
          Can you show the explain plan for that query?

          Comment

          • coolbuddy
            New Member
            • Feb 2010
            • 5

            #6
            The explain plan for my query is as follows:

            Code:
            Explain analyze Select * from tableA where liveid in (select id from tableB where vehicleid in (select id from tableC where registrationNumber like '%0306')) and insertedDate>'2010-02-25' and insertedDate<'2010-02-26';
                                                              QUERY PLAN
            ---------------------------------------------------------------------------------------------------------
             Hash IN Join  (cost=37054.99..279253.69 rows=119 width=140) (actual time=13374.289..90126.653 rows=965 loops=1)
               Hash Cond: (tableA.liveid = tableB.id)
               ->  Bitmap Heap Scan on tableA  (cost=1998.45..243537.42 rows=87806 width=140) (actual time=11841.439..87984.638 rows=255173 loops=1)
                     Recheck Cond: ((insertedDate > '2010-02-25 00:00:00+05:30'::timestamp with time zone) AND (insertedDate < '2010-02-26 00:00:00+05:30'::timestamp with time zone))
                     ->  Bitmap Index Scan on tableA_insertedDate  (cost=0.00..1976.50 rows=87806 width=0) (actual time=11825.504..11825.504 rows=255180 loops=1)
                           Index Cond: ((insertedDate > '2010-02-25 00:00:00+05:30'::timestamp with time zone) AND (insertedDate < '2010-02-26 00:00:00+05:30'::timestamp with time zone))
               ->  Hash  (cost=35009.64..35009.64 rows=3752 width=8) (actual time=1.070..1.070 rows=1 loops=1)
                     ->  Nested Loop  (cost=190.88..35009.64 rows=3752 width=8) (actual time=1.054..1.063 rows=1 loops=1)
                           ->  HashAggregate  (cost=27.23..27.24 rows=1 width=8) (actual time=0.269..0.271 rows=1 loops=1)
                                 ->  Seq Scan on tableC  (cost=0.00..27.23 rows=1 width=8) (actual time=0.019..0.253 rows=1 loops=1)
                                       Filter: (registrationNumber ~~ '%0360'::text)
                           ->  Bitmap Heap Scan on tableB  (cost=163.65..34809.35 rows=13844 width=16) (actual time=0.771..0.773 rows=1 loops=1)
                                 Recheck Cond: (tableB.vehicleid = tableC.id)
                                 ->  Bitmap Index Scan on livestats_driver_vehicle_index  (cost=0.00..160.19 rows=13844 width=0) (actual time=0.147..0.147 rows=386 loops=1)
                                       Index Cond: (tableB.vehicleid = tableC.id)
             Total runtime: 90143.018 ms
            (16 rows)

            Comment

            • rski
              Recognized Expert Contributor
              • Dec 2006
              • 700

              #7
              Did you try to replace the 'IN' clauses with a simple join?
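
              For illustration, a join rewrite of the query from #4 might look roughly like this (an untested sketch; if tableB or tableC can match more than one row per liveid, an EXISTS or DISTINCT form would be needed to keep the same result set):
              Code:
              -- Hypothetical join form of the subquery version (table and column
              -- names taken from the thread).
              select a.*
              from tableA a
              join tableB b on b.id = a.liveid
              join tableC c on c.id = b.vehicleid
              where c.registrationNumber = 'MH4R2423'
                and a.inserteddate > '2010-02-20'
                and a.inserteddate < '2010-02-21';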

              Comment

              • coolbuddy
                New Member
                • Feb 2010
                • 5

                #8
                Joins take even more time... I have already tried that, but it was not useful either...

                Comment

                • rski
                  Recognized Expert Contributor
                  • Dec 2006
                  • 700

                  #9
                  OK, let's think
                  Don't you think that you can replace
                  Code:
                  inserteddate>'2010-02-20' and inserteddate<'2010-02-21'
                  with
                  Code:
                  to_char(inserteddate,'YYYY-MM-DD')='2010-02-20'
                  After that it would be good to replace the index you have on inserteddate with a function-based index on to_char(inserteddate,'YYYY-MM-DD').
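
                  As a hedged sketch of that suggestion (the function and index names are illustrative, not from the thread): PostgreSQL only allows IMMUTABLE expressions in an index, and to_char() on a timestamp is not marked IMMUTABLE, so the expression usually has to be wrapped first:
                  Code:
                  -- 'Asia/Kolkata' is an assumption based on the +05:30 offsets in the plan;
                  -- pinning the time zone makes the result deterministic, so marking the
                  -- wrapper IMMUTABLE is defensible.
                  CREATE OR REPLACE FUNCTION inserteddate_day(ts timestamp with time zone)
                  RETURNS text AS $$
                      SELECT to_char($1 AT TIME ZONE 'Asia/Kolkata', 'YYYY-MM-DD');
                  $$ LANGUAGE sql IMMUTABLE;

                  CREATE INDEX tablea_inserteddate_day_idx ON tableA (inserteddate_day(inserteddate));

                  -- The query must then filter on the same expression for the index to be used:
                  -- ... WHERE inserteddate_day(inserteddate) = '2010-02-20' ...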

                  Comment

                  • coolbuddy
                    New Member
                    • Feb 2010
                    • 5

                    #10
                    Thanks for the reply...
                    Buddy, I also did the same thing, but in a different way.
                    I added one varchar column to the table, and then inserted the value of the inserteddate column, without the time part, into the new column.
                    Then I put an index on that column and made some modifications to the Java code as well. But when I checked the performance on my local box it was almost the same, not much of a difference...
                    But I am still searching for a way to do this task...
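
                    A rough sketch of what that approach looks like in SQL (the column and index names are illustrative, not from the thread):
                    Code:
                    -- Add a text copy of the date part, backfill it, and index it.
                    ALTER TABLE tableA ADD COLUMN inserteddate_text varchar(10);
                    UPDATE tableA SET inserteddate_text = to_char(inserteddate, 'YYYY-MM-DD');
                    CREATE INDEX tablea_inserteddate_text_idx ON tableA (inserteddate_text);

                    -- Queries then filter on the new column:
                    -- ... WHERE inserteddate_text = '2010-02-20' ...
                    The application (or a trigger) also has to keep the column populated on every insert, which is presumably the Java-side modification mentioned above.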

                    Comment

                    • rski
                      Recognized Expert Contributor
                      • Dec 2006
                      • 700

                      #11
                      Can you describe the relations between the tables? I mean, can you describe the ERD for that database (or show the table definitions: do they contain any foreign keys, primary keys, ...)?

                      Comment
