VBA Import & Parse: File Sharing Lock Count Exceeded - Access 97 Bug?

  • MalFarrelle
    New Member
    • Apr 2012
    • 14

    VBA Import & Parse: File Sharing Lock Count Exceeded - Access 97 Bug?

It's been a very long time since I used Access; I'm more of an MS Project / Excel-with-VBA person.

    I am developing a VBA app under Access 97 and running into problems when importing data and creating large data tables. I receive the error: "File sharing lock count exceeded (Error 3052)."


If I remove my encapsulating BeginTrans...CommitTrans everything works, but painfully slowly. I had presumed that committing a transaction would release all locks; this seems not to be happening. Is it me? (In an attempt to overcome the error I include in my import loop an "If recordcount > (some threshold) Then CommitTrans, BeginTrans, recordcount = 0" hook, but this does not help regardless of how low I set the threshold.)
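    For reference, a minimal sketch of the batched-commit pattern being described is below. It is an illustration only: the table name, field name, file path and threshold are placeholders rather than anything from the real application.
    Code:
    Sub ImportWithBatchedCommits()
        Dim ws As DAO.Workspace
        Dim db As DAO.Database
        Dim rs As DAO.Recordset
        Dim strLine As String
        Dim lngCount As Long
        Const BATCH_SIZE As Long = 1000        ' commit after every 1,000 records

        Set ws = DBEngine.Workspaces(0)        ' transactions run on this workspace
        Set db = CurrentDb
        Set rs = db.OpenRecordset("tblImport", dbOpenTable)

        Open "C:\Data\source.txt" For Input As #1
        ws.BeginTrans
        Do While Not EOF(1)
            Line Input #1, strLine
            rs.AddNew
            rs!RawLine = strLine               ' real parsing of the line omitted
            rs.Update
            lngCount = lngCount + 1
            If lngCount >= BATCH_SIZE Then     ' flush this batch and start another
                ws.CommitTrans
                lngCount = 0
                ws.BeginTrans
            End If
        Loop
        ws.CommitTrans                         ' commit the final partial batch
        Close #1
        rs.Close
    End Sub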

I also increased my lock limit using DAO.DBEngine.SetOption dbMaxLocksPerFile (to increasingly large limits), all to no avail.
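    For completeness, the SetOption call referred to above looks like this; the value shown is only an example, and as far as I know the setting lasts for the current DBEngine session only.
    Code:
    ' Raise the Jet lock limit for this session (the value is an example only).
    DBEngine.SetOption dbMaxLocksPerFile, 500000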

The table I am creating/appending to comprises 11 fields, 8 of which are indexed. By design it is not normalised; it provides a parsed representation of a non-delimited text file. I am trying to append approximately 500,000 records. Any clues please? If in fact the dbMaxLocksPerFile setting was taking effect (it seems to be write-only, so I can't check?), is there a means of determining how many locks I need? Can the setting be increased on the fly?

    Many questions I know . . . I shall keep my fingers crossed.

    TIA,
    Mal
  • NeoPa
    Recognized Expert Moderator MVP
    • Oct 2006
    • 32653

    #2
    Too many questions. I will make one fundamental suggestion. From there, if more is required then a specific question would be required to cover it.

    My suggestion is to import the data in as basic a way as possible. No indices. No properties that could ever cause a record to fail.

    From there, you can do your error checking with your own reporting and then build up whatever indices you need in single steps. I expect many of the problems (time; resources; etc.) are related to doing all the extra work within the basic import.
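    As an illustration of building up the indices in single steps after the import, something along these lines could be used (a sketch only; the table, field and index names are placeholders):
    Code:
    Dim db As DAO.Database
    Set db = CurrentDb
    ' One DDL statement per index, so each can be run (and timed) separately.
    db.Execute "CREATE INDEX idxEventDate ON tblImport (EventDate)", dbFailOnError
    db.Execute "CREATE INDEX idxEventType ON tblImport (EventType)", dbFailOnError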


    • MalFarrelle
      New Member
      • Apr 2012
      • 14

      #3
      Originally posted by NeoPa
      Too many questions. I will make one fundamental suggestion. From there, if more is required then a specific question would be required to cover it.

      My suggestion is to import the data in as basic a way as possible. No indices. No properties that could ever cause a record to fail.

      From there, you can do your error checking with your own reporting and then build up whatever indices you need in single steps. I expect many of the problems (time; resources; etc.) are related to doing all the extra work within the basic import.
      Thanks for the reply.

      Importing to a non-indexed table then using SQL to append imported results to the indexed destination table is ultra-slow, and seems unnecessary (if my assumption that CommitTrans should release all file locks is true . . . is it?).
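      For context, the SQL append step being described here boils down to a single statement along these lines (table names are placeholders):
      Code:
      ' Append everything from the un-indexed staging table to the indexed destination table.
      CurrentDb.Execute "INSERT INTO tblEvents SELECT * FROM tblImport", dbFailOnError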


      • NeoPa
        Recognized Expert Moderator MVP
        • Oct 2006
        • 32653

        #4
        CommitTrans simply ensures the table is updated with any transaction updates. Locks for the related records may be released at this time, but locking systems vary (even within Access options) so not all locks are necessarily released.

        Furthermore, what I suggested was not quite what you seemed to infer. I was suggesting that the indices get applied to the table the data is in, rather than transferring the data to a table where the indices are already prepared.

        Originally posted by MalFarrelle
        Importing to a non-indexed table then using SQL to append imported results to the indexed destination table is ultra-slow
        Interesting. I wouldn't have expected this. I didn't suggest it, but nevertheless would not have expected a great deterioration of performance compared to your original approach. What timings do you have? How much slower was it?


        • MalFarrelle
          New Member
          • Apr 2012
          • 14

          #5
Thank you again. It was a wild assumption that CommitTrans should release the locks. I didn't mean to imply that I would transfer the data to a table where the indices are already prepared; I meant simply to append the records to a table that is indexed (so that Access will create new index entries for the new records).

I don't have timings recorded now. I shall rebuild my VBA using the technique you suggest and compare it with what I have now (which issues an .AddNew ... .Update transaction for *every* new record, to work around seemingly having no direct control over the release of locks).

I still foresee problems. In the event that the source file spawns many more records than there are available locks, and with no obvious means of forcing a release of locks, what is the preferred technique? Trap Error 3052 and just repeat until the error goes away?
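          A rough sketch of that trap-and-retry idea is below, purely as an illustration: the routine, field and value names are placeholders, it assumes a transaction is already open on the default workspace, and it assumes the error is raised by the .Update call.
          Code:
          Sub AppendWithLockTrap(rs As DAO.Recordset, strValue As String)
              ' Assumes BeginTrans has already been issued on the default workspace.
              Dim ws As DAO.Workspace
              Set ws = DBEngine.Workspaces(0)

              On Error GoTo LockTrap
          Retry:
              rs.AddNew
              rs!RawLine = strValue       ' placeholder field name
              rs.Update
              Exit Sub

          LockTrap:
              If Err.Number = 3052 Then   ' "File sharing lock count exceeded"
                  rs.CancelUpdate         ' discard the half-finished AddNew
                  ws.CommitTrans          ' flush the batch that is holding the locks
                  ws.BeginTrans           ' open a fresh transaction
                  Resume Retry            ' re-attempt the whole record
              Else
                  Err.Raise Err.Number    ' anything else is a genuine failure
              End If
          End Sub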


          • NeoPa
            Recognized Expert Moderator MVP
            • Oct 2006
            • 32653

            #6
            The preferred approach would surely be to avoid the situation entirely. I would hope my suggestion, followed through accurately, would give you that.

            There still seems to be some confusion over the approach I'm suggesting though. I hope it clarifies things if I say that it is #2 of the two possible approaches outlined below:

            Approach #1:
              • Delete all data from the table used to import new data.
              • Import new data into the table (which has no indices defined).
              • Potentially do any validity checks (and reporting) on the data at this stage.
              • Copy the data from this table to a table which does have indices defined.

            Approach #2:
              • Delete all data from the table used to import new data.
              • Delete any indices from the table used to import data into.
              • Import new data into the table (which currently has no indices defined).
              • Potentially do any validity checks (and reporting) on the data at this stage.
              • Add indices to the table with the imported data in it. No copying/moving of data is required at this point.


I would expect the speed improvement to come from the import being able to continue at full speed, without the per-record interruption of finding and building index values. Doing that work in a single phase, at the point of creating the index, should take less time overall.

An alternative, of course, is to avoid the use of transactions at this stage. Determine whether the import step succeeded after it has returned. If yes, then continue. If no, then reset all affected data and start over (or even allow an abort if appropriate).
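            A rough sketch of that check-afterwards idea might look like the following; the table name and expected count are placeholders, and how the reset and re-run are handled would depend on the application.
            Code:
            Dim lngExpected As Long
            Dim lngLoaded As Long
            lngExpected = 500000                    ' e.g. the number of lines counted in the source file
            lngLoaded = DCount("*", "tblImport")    ' how many rows actually arrived
            If lngLoaded <> lngExpected Then
                ' Reset the affected data and start over (or abort and report).
                CurrentDb.Execute "DELETE * FROM tblImport", dbFailOnError
            End If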


            • MalFarrelle
              New Member
              • Apr 2012
              • 14

              #7
              Thanks again, I appreciate the advice.

Suggestion #1 above is exactly what I inferred from your earlier reply. My issue remains: if I am importing data to create thirteen squizillion records, I want to bunch them up into chunky transactions, but there seems to be no way to do this while deterministically avoiding the lock limit. Does Access really offer no way of iteratively creating a set of n records in a transaction when the lock limit is known to be n + some records?

I would not delete records from the file used to import (it takes 5 to 10 minutes to do the record delete), preferring to drop the table and recreate it anew.

              Avoiding transactions works reliably and is what I am doing now, but gives me a 6 hour process time to import a typical data set.


              • NeoPa
                Recognized Expert Moderator MVP
                • Oct 2006
                • 32653

                #8
                Originally posted by MalFarrelle
                Suggestion #1 above is exactly what I inferred from your earlier reply.
                Correct use of 'Infer'. I like that :-)

                You did notice though, that I indicated that #1 was most definitely not my suggestion? I was suggesting #2.

                Originally posted by MalFarrelle
                I would not delete records from the file used to import (it takes 5 to 10 minutes to do the record delete) . . . preferring to drop the table and recreate it anew.
                I won't argue with that. I usually find that deleting records takes minimal time, but I doubt I deal with as many records as you do and a DROP/CREATE is about as clean as you can get anyway.

                Originally posted by MalFarrelle
                Avoiding transactions works reliably and is what I am doing now, but gives me a 6 hour process time to import a typical data set.
                Are you saying it takes longer than with transactions? Otherwise you know that generally using transactions slows down the process somewhat, as it introduces more work into the process. If it takes 6 hours without transactions then I would say that is simply a reflection of the amount of data you have. It's the best you can hope for.


                • MalFarrelle
                  New Member
                  • Apr 2012
                  • 14

                  #9
That is precisely what I am saying. It is much quicker (reference follows) to save a batch of records in a single transaction than to write single-record transactions to the database, at least in a process such as mine, where one simply reads sequential data from a text file and saves it to a data table. Hence my original question: how can I avoid the file lock issue, i.e. take control of the release of locks?

Quoting Microsoft (http://support.microsoft.com/kb/146908): "You can speed up database operations in a Microsoft Access database by using transactions. A transaction starts with a BeginTrans statement and ends with a CommitTrans or Rollback statement. However, for reasons relating to stack usage it is recommended that these code regions are kept as small as possible. This ensures optimal performance and reliability. For more information as to when to use transactions, please see the following article in the Microsoft Knowledge Base: 145757 (http://support.microsoft.com/kb/145757/EN-US/): Ideas to Consider When Using Transactions.

The sample program below [in that article] is over 17 times faster when using BeginTrans/CommitTrans. Performance may vary on different computers."


                  • NeoPa
                    Recognized Expert Moderator MVP
                    • Oct 2006
                    • 32653

                    #10
                    Originally posted by MalFarrelle
                    That is precisely what I am saying. It is much quicker (reference follows) to save a batch of records in a single transaction than to write single-record transactions to the database
                    Although the rest of it seems to explain that using transactions is actually faster, your quoted sentence makes that ambiguous by comparing large transactions with smaller ones, which is not the point of the question. The question was about comparing transactional updates with non-transactional ones. This leaves me somewhat confused, but I'll look into it.

Having now read the linked articles, I see that this is an apples v pears situation. They explain that transactions speed things up when updating using VBA (i.e. using the Recordset.Edit() and Recordset.Update() VBA statements). That's perfectly possible, but we were discussing SQL processes (I thought), and these are typically many times faster than recordset processing in VBA. If that is not what you're doing, and you're actually importing data from somewhere and placing it carefully in another table using .Update in VBA, then my advice would certainly be to cease and desist ASAP. SQL should be used where possible if performance is any one of your priorities.
                    Last edited by NeoPa; Apr 25 '12, 12:31 PM. Reason: Typo


                    • MalFarrelle
                      New Member
                      • Apr 2012
                      • 14

                      #11
By single-record transaction I mean a single .AddNew ... .Update (an implied transaction, with no BeginTrans ... CommitTrans bracketing).

The source data exists in a text file. It is not delimited data, it is not fixed-width data, and the actual data content is not prescribed. SQL in and of itself won't help me here; I have developed (in VBA for Access) a parser which reads the source file line by line and populates an Access data table with unnormalised data that correlates with the source data file. (Subsequently, and not the subject of this thread, I normalise the data and produce further higher-order data tables.)

So, no, I can't use SQL alone. Yes, I am using VBA to import and parse text files. Yes, I want to batch the records together into large transactions to take advantage of the well-documented time savings associated with doing so.

My question remains: is it possible to deterministically avoid the file lock limits which I encounter when importing extremely large data tables? Anyone?


                      • NeoPa
                        Recognized Expert Moderator MVP
                        • Oct 2006
                        • 32653

                        #12
                        Originally posted by MalFarrelle
By single-record transaction I mean a single .AddNew ... .Update (an implied transaction, with no BeginTrans ... CommitTrans bracketing).
                        Ah. Much clearer without that unfortunate use of the terminology. I follow now.

I can see now (with the fuller explanation) that your situation is far from straightforward. I don't know how complex the parsing of your data is. There may be scope for importing it as a job lot (file import to table), but there again, it's very possible there is not.

                        The only thing I can think of, which is pretty non-intuitive, is to try closing and reopening the recordset between transactions. That generally adds to the overhead of table processing, but the added cost may be outweighed by the benefit of using the larger transactions. It may be worth trying, just to see.

                        Frankly, like yourself, I'm surprised that the recordset isn't reset to its earlier condition (relating to any locks) after a CommitTrans.

PS. If your code isn't too long (say, not much over 100 lines) you may like to post it for perusal. It's not impossible that there's a bug in there somewhere that you've overlooked. More eyes may catch it. Just a thought.


                        • MalFarrelle
                          New Member
                          • Apr 2012
                          • 14

                          #13
Closing and reopening the recordset might just do it; I shall try. Unfortunately I can't post the code without significantly cleansing it. What I will do is try to recreate the problem as simply as possible and then post whatever triggers it.

                          Thanks again for the help, I wouldn't have come up with the close/open recordset idea and it does sound promising.


                          • NeoPa
                            Recognized Expert Moderator MVP
                            • Oct 2006
                            • 32653

                            #14
                            Give it a try, yes. That's the easiest way to check it.

As for the code, I can understand that sometimes it can't be made public, and that's fine. I wouldn't bother cleansing it; that's more work than is warranted, and it was only an idea on the off-chance. Cleansing also carries the risk of removing the offending part anyway, which is not a reflection on your abilities, but rather of the nature of such problems being hard to notice.


                            • MalFarrelle
                              New Member
                              • Apr 2012
                              • 14

                              #15
I returned to this issue and it remains unsolved. Even after performing a recordset close/reopen at an arbitrary threshold (even one as low as 100 records), the lock limit is always exceeded after a cumulative import of about 400,000 records. Here is the code I used, which was being reached and executed at the appropriate times:
                              Code:
importct = importct + 1                                     ' one more record written in this batch
If importct > 100 Then                                      ' threshold reached: flush the batch
    CommitTrans                                             ' commit on the default workspace
    AddedRecordset.Close                                    ' close the recordset, hoping its locks are released
    Set AddedRecordset = CurrentDb.OpenRecordset("Events")  ' reopen against the Events table
    importct = 0                                            ' reset the batch counter
    BeginTrans                                              ' start the next transaction
End If
                              . . . Does anybody have any understanding of how to manage record locks under Access 97?

                              Thanks in Advance,

                              Mal
                              Last edited by NeoPa; May 8 '12, 02:19 PM. Reason: Added mandatory [CODE] tags.

