Copying Data from Access to SQL Server

  • TC

    #16
    Re: Copying Data from Access to SQL Server

    David,

The scenario is very real to me, and performance definitely matters.

    Perhaps you meant to imply that an application which needs to do
    frequent bulk uploads from client to server should not be built with
    Access / SQL Server? If so, I agree with you, but I have no choice in
    the matter -- I'm working with the challenge I've been given. And
    frankly, with the export/transfer/import technique, I think I'm headed
    toward a pretty good solution.


    -TC


    David W. Fenton wrote:
    "TC" <golemdanube@ya hoo.comwrote in
    news:1152033470 .740516.69200@7 5g2000cwc.googl egroups.com:
    >
    Based on my observations so far, I must conclude that Access
    offers no efficient way to append rows into SQL Server. Despite
    the extra steps involved, it is far more efficient to export to a
    file, move the file onto the server, then import into SQL Server.
    This is true even for a local SQL Server.
    >
    Er, why would this kind of performance difference ever matter? That
    is, how often is this going to be done in any real-world scenario?
    I'd say: use the easiest method in terms of implementation and don't
    worry at all about performance.
    >
    --
    David W. Fenton http://www.dfenton.com/
    usenet at dfenton dot com http://www.dfenton.com/DFA/


    • david epsom dot com dot au

      #17
      Re: Copying Data from Access to SQL Server

      ODBC often sends one row at a time: it does not always send
      one row at a time.

The best-known place you see this is when reading data: you
can see when the ODBC connection is using the primary key
and loading one row at a time (slow), or loading all rows at once (fast).

You can sometimes reproduce this behaviour on a subform when
changing the subform recordsource: change the subform recordsource
and the subform loads slowly; reset the subform recordsource !twice!
and the subform loads fast. An ODBC trace shows that the slow transfer
fetches single records using the primary key, while the fast transfer
is a bulk transfer.

      I am not aware of any explicit way to control this behaviour.
      Since you say that you have no indexes, presumably you do not
      have a primary key, so it's not that simple.

      (david)

      "TC" <golemdanube@ya hoo.comwrote in message
      news:1152034953 .716552.65590@v 61g2000cwv.goog legroups.com...
      Albert,
      >
      You have added something new to this discussion. Before this, I did not
      know ODBC sends a different SQL statement for each row in an INSERT
      statement. Can you provide more details? Is this a limitation of ODBC
      only, or is it true also of OLEDB? I assume it is a problem only for
      UPDATE, INSERT, and DELETE queries operating on SQL Server tables -- is
      that right, or am I failing to see broader ramifications?
      >
      If the one-statement-per-row phenomenon is true (and it does have the
      ring of truth to it), that suggests ODBC cannot be, and was probably
      never intended to be, an efficient way of transferring bulk data
      between databases. That brings me back to my original question: What is
      an efficient way of transferring bulk data between databases?
      >
      I understand your suggestion about a multi-line pass-through query. I
      doubt this will work in my case, however, since I want a solution
      capable of sending memo/BLOB columns.
      >
      >
      -TC
      >
      >
      Albert D.Kallal wrote:
      >
      1. Export, FTP, and Import -- 280 rows/s
2. TransferDatabase to ADP file -- 22 rows/s
      3. INSERT into ODBC-linked table -- 18 rows/s
      4. Procedural Loop through ADO Recordsets -- 6 rows/s
      >
      My question is really a conceptual one. Why is it so much faster to do
      the transfer outside the RDBMS (1) than inside the RDBMS (2,3,4)?
>>
>I am not surprised one bit by the above.
>>
>#1 - FTP, and using the sql (enterprise manager) tools: it can read the
>raw data.
>>
>#2, #3
>>
>How do you think a remote connection to sql works?
>>
>Ok, build a sql string. Now send the sql string. The query processor
>then loads. The query processor then parses the string. The query
>processor then checks the sql for syntax. The query processor then
>builds a query plan. The query processor then optimizes this query
>plan. The query then executes the sql AND INSERTS ONE ROW of data!
>>
>Now, send a whole new sql string for the next record. And, you are
>surprised this is slower than option one?
>>
>I don't think you should be surprised that this is going to be slower
>at all...
>>
>Remember, when you use an odbc (or oleDB) connection, EACH data insert
>is going to be a full line of sql sent to the server. Often, just the
>sql syntax overhead is GOING TO BE LARGER THAN THE ACTUAL DATA
>transferred for the ONE record. Remember, EACH record is going to
>require a FULL LINE OF sql string to insert the data.
>>
>With the ftp, and using the sql tools, the use of a sql statement for
>EACH LINE of data is by-passed....
>>
>>
>What you need to do is find the syntax in sql for MULTI-RECORD inserts
>of lines of data.
>>
>You then send about 10, or 20 lines of data IN ONE sql pass-through
>statement. Doing this would likely speed up the process by a factor of
>10, or 20 times (in fact, the speed-up would be the factor of how many
>lines of data you include in each sql).
>>
>I am reasonably sure that sql server supports an insert statement with
>MULTIPLE lines of data... so, do some testing.
>>
>Your code that loops would thus have to format the sql statement in
>code... and send it as a pass-through to sql server...
>>
>Once you realize that an odbc, or remote, connection can ONLY
>communicate with sql server by sending FULLY formatted sql statements
>for EACH record, then there should be little, if any, surprise at the
>performance numbers you posted...
>>
>--
>Albert D. Kallal (Access MVP)
>Edmonton, Alberta Canada
>pleaseNOOSpamKallal@msn.com
>http://www.members.shaw.ca/AlbertKallal
      >
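The multi-record INSERT that Albert suggests can be sketched as a statement builder. One caveat worth checking in his "do some testing" spirit: the multi-row `VALUES (...), (...)` syntax arrived in SQL Server 2008; on SQL Server 2005 and earlier the same effect is usually achieved with `INSERT ... SELECT ... UNION ALL`. The table and column names below are hypothetical, and the naive quoting is for illustration only; real code should prefer parameterized statements.

```python
def build_multirow_insert(table, columns, rows):
    """Build one INSERT statement covering many rows, so a single
    round trip to the server carries a whole batch instead of one record."""
    def render(value):
        # Naive literal rendering for the sketch; doubles embedded quotes.
        if value is None:
            return "NULL"
        if isinstance(value, (int, float)):
            return str(value)
        return "'" + str(value).replace("'", "''") + "'"

    col_list = ", ".join(columns)
    value_rows = ",\n".join(
        "(" + ", ".join(render(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({col_list})\nVALUES\n{value_rows};"

sql = build_multirow_insert(
    "Orders",
    ["OrderID", "Customer"],
    [(1, "Smith"), (2, "O'Brien"), (3, None)],
)
print(sql)
# → INSERT INTO Orders (OrderID, Customer)
#   VALUES
#   (1, 'Smith'),
#   (2, 'O''Brien'),
#   (3, NULL);
```

The resulting string would be sent from Access as a pass-through query, so JET never tries to process it row by row.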


      • Rick Brandt

        #18
        Re: Copying Data from Access to SQL Server

        "TC" <golemdanube@ya hoo.comwrote in message
        news:1152059491 .143900.223310@ 75g2000cwc.goog legroups.com...
        David,
        >
The scenario is very real to me, and performance definitely matters.
        >
        Perhaps you meant to imply that an application which needs to do
        frequent bulk uploads from client to server should not be built with
        Access / SQL Server? If so, I agree with you, but I have no choice in
        the matter -- I'm working with the challenge I've been given. And
        frankly, with the export/transfer/import technique, I think I'm headed
        toward a pretty good solution.
If this is a regular operation, have you looked into having a DTS package
pull the data directly from the Access file instead of pushing it with ODBC?
You could use a passthrough query to execute the package so that the Access
app would still be initiating the process.

        --
        Rick Brandt, Microsoft Access MVP
        Email (as appropriate) to...
        RBrandt at Hunter dot com



        • TC

          #19
          Re: Copying Data from Access to SQL Server

          Rick,

          Thanks for the suggestion. I have been avoiding any kind of "pull"
          approach because this is essentially a "push" operation. The client
          will initiate the data transfer, and the client could be anywhere. I'm
          not sure how to make DTS find the data for a client which is not at a
          fixed location.

          -TC


          Rick Brandt wrote:
          >
If this is a regular operation, have you looked into having a DTS package
pull the data directly from the Access file instead of pushing it with ODBC?
You could use a passthrough query to execute the package so that the Access
app would still be initiating the process.
          >
          --
          Rick Brandt, Microsoft Access MVP
          Email (as appropriate) to...
          RBrandt at Hunter dot com


          • Albert D.Kallal

            #20
            Re: Copying Data from Access to SQL Server

            You have added something new to this discussion. Before this, I did not
            know ODBC sends a different SQL statement for each row in an INSERT
            statement. Can you provide more details? Is this a limitation of ODBC
            only, or is it true also of OLEDB? I assume it is a problem only for
            UPDATE, INSERT, and DELETE queries operating on SQL Server tables -- is
            that right, or am I failing to see broader ramifications?
Yes, as far as I know, the above is how it works (and, yes, for oledb
also). It is not really a limitation, it is just how you talk to sql
server... and that means you use sql. You never write raw data, but you
ALWAYS use sql text.

When writing records to a JET based mdb file on your local computer, you
will find performance MUCH faster, since you are not using sql strings to
write out the data. When you throw in the use of the seek command, you can
achieve MUCH higher data write and search speeds than sql server, since you
have this little jet engine that just SCRAPES data right off of the disk
drive.

As a side note, often JET based applications are 2 times faster than sql
based applications (same hardware, same machine). We are not talking 5%, or
10%... but 2 times!!

I would even suggest that you create a new temp mdb file/table, write the
data to that file, and then use the sql server import tools to import that
mdb file. That would likely be the fastest import, and the least amount of
development cost on your part. As for automating this process from start to
finish, that is another matter. (I have not done this with the DTS tools...
but I suspect you can.)

What this means is that you want to use an import system/option of sql
server, and not an odbc connection....
            If the one-statement-per-row phenomenon is true (and it does have the
            ring of truth to it), that suggests ODBC cannot be, and was probably
            never intended to be, an efficient way of transferring bulk data
            between databases. That brings me back to my original question: What is
            an efficient way of transferring bulk data between databases?
            For sql server to another sql server?

You simply export to a file backup (use the bulk copy options). You then
move that file to the other sql server, and then do a bulk import. This type
of transfer does NOT involve sql.

It seems to me that the DTS tools also allow you to import the mdb file
you just created (or the one you will create, based on what we conclude!!).

            So, for between machines that are both running sql server, you use the bulk
            copy options.
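The server-to-server bulk copy described above is typically driven with the `bcp` command-line utility that ships with SQL Server. As a hedged sketch of that round trip, the snippet below just assembles the two commands; the server, database, and table names are hypothetical, and the actual runs (commented out) would happen on the source and destination machines.

```python
import subprocess  # only needed if the commented-out runs are enabled

def bcp_export(table, data_file, server):
    # "out" writes table data to a file; -n uses native format,
    # -T uses a trusted (Windows) connection.
    return ["bcp", table, "out", data_file, "-S", server, "-n", "-T"]

def bcp_import(table, data_file, server):
    # "in" bulk-loads the file into the destination table,
    # bypassing per-row SQL statements entirely.
    return ["bcp", table, "in", data_file, "-S", server, "-n", "-T"]

export_cmd = bcp_export("MyDb.dbo.Orders", "orders.bcp", "SOURCE01")
import_cmd = bcp_import("MyDb.dbo.Orders", "orders.bcp", "DEST01")
# subprocess.run(export_cmd, check=True)  # run against the source server
# subprocess.run(import_cmd, check=True)  # run against the destination
print(" ".join(export_cmd))
```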

For odbc, you are pretty much out of luck. And there does seem to be the
possibility of downloading some of the sql server library code and rolling
your own custom imports... but that sounds like a lot of work (but they can
be used... as that is how the DTS and bulk copy operations were written).

Barring the above, you might try using a transaction.... run 1000
inserts... and then commit... that might help, but you are venturing into
waters that I have not tested, or tried.
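The transaction idea is worth illustrating in principle. The sketch below uses Python's stdlib `sqlite3` module standing in for SQL Server (an assumption; the real app would go through ODBC): the inserts are still sent one row at a time, but grouping 1000 of them per commit means the database flushes once per batch rather than once per row, which is where the suggested saving comes from.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, f"row {i}") for i in range(5000)]
BATCH = 1000  # rows per transaction, per Albert's suggestion

cur = conn.cursor()
for start in range(0, len(rows), BATCH):
    batch = rows[start:start + BATCH]
    # Still one INSERT per row, but all 1000 rows share a single
    # commit -- the commit, not the statement, is what gets batched.
    cur.executemany("INSERT INTO t VALUES (?, ?)", batch)
    conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # → 5000
```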

            --
            Albert D. Kallal (Access MVP)
            Edmonton, Alberta Canada
pleaseNOOSpamKallal@msn.com




            • Terry Kreft

              #21
              Re: Copying Data from Access to SQL Server


              You don't, you push the location to the DTS package.

              If you are using SQL Server2000 or above you can use global variables on the
              command line, if you are using a lower version then you populate a table and
              have the DTS package read the table.

The way I have set this up before is:
              DTS package which has a global var
              Stored procedure to be executed by the application (this calls the DTS
              package)

              In Access you then
              Create a passthrough query to execute the stored procedure


              At run time you
Amend the passthrough query SQL to include the path to the data you want
to import.
              Execute the passthrough query
              The stored procedure is then called which executes the DTS package,
              passing the path as a global variable.
              The DTS package then runs, the first step in the DTS package is a script
              object which sets the import path according to the global variable.
              The rest of the DTS package then runs importing the data.
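The Access-side piece of this setup is just rewriting the pass-through query's SQL at run time. A minimal sketch of building that SQL, assuming a hypothetical stored procedure name (`usp_RunImportPackage`) and parameter (`@ImportPath`) -- the real names would come from your own DTS wrapper procedure:

```python
def build_passthrough_sql(proc_name, import_path):
    """Build the EXEC statement the pass-through query would send,
    passing the client's data path to the stored procedure that
    launches the DTS package."""
    # Double any embedded single quotes so the path survives quoting.
    safe_path = import_path.replace("'", "''")
    return f"EXEC {proc_name} @ImportPath = '{safe_path}'"

sql = build_passthrough_sql("usp_RunImportPackage",
                            r"\\CLIENT42\share\upload.mdb")
print(sql)
# → EXEC usp_RunImportPackage @ImportPath = '\\CLIENT42\share\upload.mdb'
```

In Access this string would be assigned to the pass-through QueryDef's SQL property before execution; the stored procedure then hands the path to the DTS package as the global variable Terry describes.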


              --

              Terry Kreft


              "TC" <golemdanube@ya hoo.comwrote in message
              news:1152117364 .415413.254070@ p79g2000cwp.goo glegroups.com.. .
              Rick,
              >
<SNIP> I'm
              not sure how to make DTS find the data for a client which is not at a
              fixed location.
              >
              -TC
              >
              >
              Rick Brandt wrote:

If this is a regular operation have you looked into having a DTS package
pull the data directly from the Access file instead of pushing it with
ODBC? You could use a passthrough query to execute the package so that
the Access app would still be initiating the process.

              --
              Rick Brandt, Microsoft Access MVP
              Email (as appropriate) to...
              RBrandt at Hunter dot com
              >
