How can I avoid table lock in SQL2000?

This topic is closed.
  • gdekhayser

    How can I avoid table lock in SQL2000?

    To any and all;

    I have a very large table (16Mil+ records), for which I want to delete
    about 8 Million records. There are a few indexes on this table.

    Is there a way that I can run either a query or a series of queries
    that will run against each record and delete based on criteria (date)?
    If I do a single DELETE query, it will take forever, lock the table,
    and my app that runs against it will stop INSERTING, which is bad.

    If I do a cursor, I think it locks the table also, so that won't do,
    right?

    Any help would be appreciated.

    Glenn Dekhayser
Contentcatcher.com

  • Steve Jorgensen

    #2
    Re: How can I avoid table lock in SQL2000?

On 15 Feb 2005 13:19:01 -0800, "gdekhayser" <gdekhayser@voyantinc.com> wrote:
> [original post snipped]

    With a table that size, have you thought of partitioning it by date? Then,
    locks would be held only on the partition(s) that overlap the date range you
    are deleting.
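SQL Server 2000 has no native table partitioning, so in practice this would be a partitioned view over per-period member tables. A minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical per-year member tables; the CHECK constraints let the
-- optimizer route queries and inserts to the right member table.
CREATE TABLE Events_2004 (
    EventID   int      NOT NULL,
    EventDate datetime NOT NULL
        CHECK (EventDate >= '20040101' AND EventDate < '20050101'),
    CONSTRAINT PK_Events_2004 PRIMARY KEY (EventID, EventDate)
)
GO
CREATE TABLE Events_2005 (
    EventID   int      NOT NULL,
    EventDate datetime NOT NULL
        CHECK (EventDate >= '20050101' AND EventDate < '20060101'),
    CONSTRAINT PK_Events_2005 PRIMARY KEY (EventID, EventDate)
)
GO
CREATE VIEW Events AS
    SELECT EventID, EventDate FROM Events_2004
    UNION ALL
    SELECT EventID, EventDate FROM Events_2005
GO
-- Purging an old period is then a metadata operation, not a
-- row-by-row delete (after recreating the view without that member):
DROP TABLE Events_2004
```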


    • gdekhayser

      #3
      Re: How can I avoid table lock in SQL2000?

      Right, but I need to deal with the records in place before I can do
      that....don't I? Once I get the table down in size to something
      rational, then I can partition it....


      • Erland Sommarskog

        #4
        Re: How can I avoid table lock in SQL2000?

gdekhayser (gdekhayser@voyantinc.com) writes:
> [original post snipped]

        First of all, if there is no index on this date column, you can never
        avoid the table lock.

If there is a clustered index on the date column, you should be able
to say "DELETE tbl WHERE date < @somedateinthepast" and still have
your inserts coming through. Then again, if you delete day by day,
it may still be leaner.

        If there is a non-clustered index on date, it depends on how many
        rows there are per date. If there are too many rows per date, a
        DELETE per date could table-scan. You could force an index, but
        plenty of rows would be locked, for some time.
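The day-by-day variant might look like this sketch (the table name `tbl` and column `date` are taken from the example above; the date range is assumed):

```sql
-- Delete one day per statement, so each DELETE touches a contiguous
-- range of the clustered index and holds its locks only briefly.
DECLARE @d datetime
SET @d = '20040101'                  -- assumed: oldest day to purge
WHILE @d < '20050101'                -- assumed: purge cutoff
BEGIN
    DELETE tbl
    WHERE [date] >= @d AND [date] < DATEADD(day, 1, @d)
    SET @d = DATEADD(day, 1, @d)
END
```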


        --
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

        Books Online for SQL Server SP3 at


        • Tzvika Barenholz

          #5
          Re: How can I avoid table lock in SQL2000?

For the future, you should consider partitioning, as deleting n
records is O(n) while dropping a table (or truncating it) is O(1).

For now, I would recommend deleting with SET ROWCOUNT set to some
number, probably in the low thousands, which should leave the table in
a fairly operational state while you do it.

Of course, after such a massive delete the table will be in a terribly
fragmented state internally, so the next step is to rebuild the
indexes.

Another alternative is to create another table with the same schema but
without the indexes, insert into it the rows that you do want to keep
(the complement of the 'to be deleted'), and then build the indexes.
You can then drop the old table and rename the new one.

Hope this helps,
Tzvika
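A sketch of the SET ROWCOUNT approach described above (SQL Server 2000 syntax; the table name, column name, batch size, and cutoff date are all hypothetical):

```sql
SET ROWCOUNT 2000                          -- cap each DELETE at 2000 rows
WHILE 1 = 1
BEGIN
    DELETE tbl WHERE [date] < '20040101'   -- assumed purge criterion
    IF @@ROWCOUNT = 0 BREAK                -- nothing left to delete
END
SET ROWCOUNT 0                             -- restore the default

-- Afterwards, rebuild the indexes to undo the fragmentation:
DBCC DBREINDEX ('tbl')
```

Keeping each batch small means each DELETE commits quickly and releases its locks, so concurrent INSERTs can interleave between batches.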


gdekhayser wrote:
> [original post snipped]
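The copy-and-swap alternative could be sketched like this (hypothetical names and cutoff; in SQL Server 2000 the rename is done with sp_rename):

```sql
-- Copy only the rows to keep into a new, index-free table.
SELECT *
INTO tbl_new
FROM tbl
WHERE [date] >= '20040101'           -- assumed: rows to KEEP
GO
-- Build the indexes on the new table, then swap the names.
CREATE CLUSTERED INDEX IX_tbl_new_date ON tbl_new ([date])
GO
DROP TABLE tbl
GO
EXEC sp_rename 'tbl_new', 'tbl'
```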


          • gdekhayser

            #6
            Re: How can I avoid table lock in SQL2000?

            Erland- I wish I would have read your post earlier. I would have
            thought that removing the indexes would make the delete faster. I
            goofed and left the 4 indexes in place, the date one wasn't clustered.
            I really wouldn't have cared about the row locks if I forced an index,
            as they were going to be deleted anyway and not accessed.

            I ended up just performing the delete- and watching my logs grow REAL
            large.

            I now have 3 log files where I really only want one- is there a way to
            condense back into one log file when you're split into 3?


            • Erland Sommarskog

              #7
              Re: How can I avoid table lock in SQL2000?

gdekhayser (gdekhayser@voyantinc.com) writes:
> Erland- I wish I would have read your post earlier. I would have
> thought that removing the indexes would make the delete faster.

Yeah, removing the indexes not useful for the DELETE would have sped
things up a little. It may not be a good idea to drop the clustered
index, though.

> I ended up just performing the delete- and watching my logs grow REAL
> large.
>
> I now have 3 log files where I really only want one- is there a way to
> condense back into one log file when you're split into 3?

Well, you can say ALTER DATABASE REMOVE FILE, but you would have to
truncate the log first. Maybe the best is to use WITH TRUNCATE_ONLY
and then take a full backup of the database. I guess that then you
should be able to remove some of the files.

              But I will have to admit that I have never had reason to drop a log
              file, so I am just speculating here.
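That suggestion might look roughly like this in SQL Server 2000 (the database and file names are hypothetical, and as noted above this is speculative):

```sql
-- Discard the inactive log records (this breaks the log backup chain).
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO
-- Remove an extra log file; this only succeeds if the file is empty.
ALTER DATABASE MyDb REMOVE FILE MyDb_Log2
GO
-- Take a full backup afterwards to restart the backup chain.
BACKUP DATABASE MyDb TO DISK = 'C:\backup\MyDb.bak'
```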


              --
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

              Books Online for SQL Server SP3 at
