writing code for an append to destroy table

  • Michael Kirkby
    New Member
    • Aug 2014
    • 2


    Hi,
    I know I have to create the Append Table first and then the Append query which I know how to do in Access 2010.

    I have several thousand records that are in the process of being destroyed. Their status will change from A to D, and I would prefer to keep them in a separate table.

    What is the easiest way to write the necessary code so that these destroyed records automatically transfer over to the new table? Do I also need to use a filter? I never took a course in Access so I mostly learn as I go along. Any assistance you could give me will be appreciated.

    Thanks,
    Michael Kirkby
  • twinnyfo
    Recognized Expert Moderator Specialist
    • Nov 2011
    • 3662

    #2
    Michael,

    Please provide a little more information on your question. Typically, we don't "destroy" tables in Access. We can delete records, and it is also possible to delete an entire Table, but I'm not exactly sure what it is that you want to do.

    Do you simply want to identify a certain set of records (defined above as previously being "A", but now they are "D"), and then copy those records to a different table, and then delete those records from their original table?

    Also, how are you planning to do this? Have you designed a form that will make these updates? What have you tried so far?

    I know you say you are a novice, and that's not a problem--we've all been there. However, some additional information would help us help you better.


    • Michael Kirkby
      New Member
      • Aug 2014
      • 2

      #3
      Each record or entry has a Status field. When we destroy the actual file or box in storage, the record's status in the database changes from A (active) to D (destroyed).
      What I want to do is create a table to which all files marked as D are automatically appended — a Destroyed table.
      I've written query strings for things like file type and location, but I'm just not sure how to write this one. Any suggestions?


      • twinnyfo
        Recognized Expert Moderator Specialist
        • Nov 2011
        • 3662

        #4
        A simple way to do this (if you want to maintain the same field structure as the original table) is to first make a copy of the table that holds the original records. (If the table is a linked table, you must do this in the back-end DB.) Simply select the table, Ctrl-C, Ctrl-V. Access will ask if you want Structure and Data or Structure Only. Choose Structure Only.

        Then, create a query that selects all the fields in these records, in which your only criterion is "[FileStatus] = 'D'". On the Query Design tab, select "Append Query". Access will ask which table to append to; select the new table you just made.

        Save that query (it should work that simply).

        To get rid of those records from the original table, create another query with the same criteria, but make it a "Delete Query". These two queries, when run sequentially, should copy all the records with FileStatus "D" into the new archive table and then delete those records from the original.
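        The two queries the Access designer builds boil down to an `INSERT INTO ... SELECT` followed by a `DELETE`. Here is a minimal sketch of that append-then-delete pattern, using SQLite in place of the Access back end; the table and field names (`Files`, `ArchiveFiles`, `FileStatus`) are illustrative, not taken from the poster's database:

        ```python
        import sqlite3

        # In-memory database standing in for the Access back end.
        con = sqlite3.connect(":memory:")
        cur = con.cursor()
        cur.execute("CREATE TABLE Files (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")
        # Archive table with the same structure ("Structure Only" copy).
        cur.execute("CREATE TABLE ArchiveFiles (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")
        cur.executemany("INSERT INTO Files VALUES (?, ?)",
                        [(1, 'A'), (2, 'D'), (3, 'A'), (4, 'D')])

        # Step 1: the append query -- copy all 'D' records into the archive table.
        cur.execute("INSERT INTO ArchiveFiles SELECT * FROM Files WHERE FileStatus = 'D'")

        # Step 2: the delete query -- remove the same records from the original table.
        cur.execute("DELETE FROM Files WHERE FileStatus = 'D'")
        con.commit()

        print(cur.execute("SELECT COUNT(*) FROM Files").fetchone()[0])         # 2
        print(cur.execute("SELECT COUNT(*) FROM ArchiveFiles").fetchone()[0])  # 2
        ```

        Running the append before the delete matters: if the delete ran first, there would be nothing left to copy.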

        Hope this helps.


        • Rabbit
          Recognized Expert MVP
          • Jan 2007
          • 12517

          #5
          You shouldn't need two tables to separate these records. You can leave them in one table and use the status field to filter as appropriate.
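          A sketch of this single-table approach, again using SQLite with illustrative names: day-to-day queries and forms simply filter on the status field, and nothing is ever moved:

          ```python
          import sqlite3

          con = sqlite3.connect(":memory:")
          cur = con.cursor()
          cur.execute("CREATE TABLE Files (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")
          cur.executemany("INSERT INTO Files VALUES (?, ?)",
                          [(1, 'A'), (2, 'D'), (3, 'A')])

          # Active records only -- the filter every day-to-day query would carry.
          active = cur.execute(
              "SELECT FileID FROM Files WHERE FileStatus = 'A'").fetchall()
          print(active)  # [(1,), (3,)]
          ```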


          • twinnyfo
            Recognized Expert Moderator Specialist
            • Nov 2011
            • 3662

            #6
            Rabbit,

            True that--however, I have a bunch of records that I must retain, but very seldom use. I can either save them in an archive table (based on the status of the record) or keep them in the same table and filter as needed.

            However, wouldn't this significantly slow down the performance of the DB as the table grew much larger over time? This keeps my table down to around 3–4,000 records, instead of 25–30,000 records (and growing).

            Is there a way to do as you suggest without hindering performance?


            • Rabbit
              Recognized Expert MVP
              • Jan 2007
              • 12517

              #7
              An index helps a lot. But I usually use SQL Server as a back end.
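               To make the point concrete, here is a small sketch (SQLite, illustrative names) showing that an index on the filtered field lets the engine search the index rather than scan the whole table — the same principle applies to an indexed field in Access or SQL Server:

               ```python
               import sqlite3

               con = sqlite3.connect(":memory:")
               cur = con.cursor()
               cur.execute("CREATE TABLE Files (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")

               # Index the field used as the filter criterion.
               cur.execute("CREATE INDEX idx_FileStatus ON Files (FileStatus)")

               # EXPLAIN QUERY PLAN shows the engine using the index for the filter
               # instead of a full table scan.
               plan = cur.execute(
                   "EXPLAIN QUERY PLAN SELECT * FROM Files WHERE FileStatus = 'D'"
               ).fetchall()
               print(plan)
               ```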


              • zmbd
                Recognized Expert Moderator Expert
                • Mar 2012
                • 5501

                #8
                Just to see how long this would take, I pulled a linked table from our SQL back end into Access:
                20 fields of various datatypes,
                6 of the fields FKs,
                45,000 records.
                The initial pull from the back end to local took 3 seconds.
                An initial unfiltered query (a simple select built with the wizard) returning all fields took less than 2 seconds.
                After closing and reopening the database (I didn't want any residual mumbo-jumbo), pulling up the simple select and filtering on one of the fields took under 2 seconds.
                I also ran one of my more complex queries on this local data (usually run over the link to the SQL Server).
                Over the network that query takes 5 seconds; however, it also uses the entire SQL back end, not just a subset. On the local 45,000 records it took about 3 seconds.

                So, not rigorous testing; however, in a properly normalized database, things don't seem to run too badly.


                • NeoPa
                  Recognized Expert Moderator MVP
                  • Oct 2006
                  • 32656

                  #9
                  @Twinnyfo.

                  You should be very careful before you ever diverge from normalisation concepts on the basis of performance. As a general rule, the impact is much less than expected when the database is properly designed. Diverge, and you will reliably introduce complications into your system — exactly the ones that normalisation and proper database design are meant to protect you from.

                  Much of my work ATM is related to fixing a system where the idea of archiving little-used data was allowed to take precedence. To be fair, the designer knew no better at that point in time. It's very slow and painful work, when there are much better enhancements crying out to be made.


                  • twinnyfo
                    Recognized Expert Moderator Specialist
                    • Nov 2011
                    • 3662

                    #10
                    @ NeoPa, Z and Rabbit,

                    Some good things for me to think about. Just as a thought, is it worth "reimporting" the old data? I know this might take a while to get right — but also, I am concerned about proper design. That's why I am on this site, right?

                    ;-)


                    • NeoPa
                      Recognized Expert Moderator MVP
                      • Oct 2006
                      • 32656

                      #11
                      First make sure you have the design of your live table right - which includes the indices you'll need - then yes. Transfer the archived data back to the main table.

                      Also, ensure all objects (queries, forms, reports, etc.) know that if they want live data only, they need to filter on the [Status] (or whatever) field.
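                      Moving the archived rows back is the earlier append query in reverse. A minimal sketch, continuing the illustrative SQLite names from the earlier examples:

                      ```python
                      import sqlite3

                      con = sqlite3.connect(":memory:")
                      cur = con.cursor()
                      cur.execute("CREATE TABLE Files (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")
                      cur.execute("CREATE TABLE ArchiveFiles (FileID INTEGER PRIMARY KEY, FileStatus TEXT)")
                      # Index the status field first, as suggested, so filtered
                      # queries on the merged table stay fast.
                      cur.execute("CREATE INDEX idx_FileStatus ON Files (FileStatus)")
                      cur.executemany("INSERT INTO Files VALUES (?, ?)", [(1, 'A')])
                      cur.executemany("INSERT INTO ArchiveFiles VALUES (?, ?)",
                                      [(2, 'D'), (3, 'D')])

                      # Append the archived rows back into the live table,
                      # then clear the archive table.
                      cur.execute("INSERT INTO Files SELECT * FROM ArchiveFiles")
                      cur.execute("DELETE FROM ArchiveFiles")
                      con.commit()

                      print(cur.execute("SELECT COUNT(*) FROM Files").fetchone()[0])  # 3
                      ```

                      From here on, every query, form, and report that wants live data filters on the status field rather than relying on a separate table.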


                      • twinnyfo
                        Recognized Expert Moderator Specialist
                        • Nov 2011
                        • 3662

                        #12
                        @NeoPa - I will start a new thread on this....

