Data transfer problem - ideas/solutions wanted (please)

  • E.T. Grey

    Data transfer problem - ideas/solutions wanted (please)

    Hi,

    I have an interesting problem. I have a (LARGE) set of historical data
    that I want to keep on a central server, as several separate files. I
    want a client process to be able to request the data in a specific file
    by specifying the file name, start date/time and end date/time.

    The files are in binary format, to conserve space on the server (as well
    as to reduce processing time). The data in each file can be quite
    large, covering several years of data. New data will be appended to
    these files each day, by a (PHP) script. The server machine is likely to
    be a Unix machine, whereas my clients will be running on Windows
    machines. My client program is written in C++.

    My three main problems/questions are as follows:

    1). Transfer method issue:
    What is the best (i.e. most efficient and fastest) way to transfer data
    from the server to clients? I think SOAP is likely to be too slow,
    because of the sheer size of the data.

    2). Cross-platform issue:
    How can I ensure that the (binary?) data sent from the Unix server
    can be correctly interpreted at the client side?

    3). Security issue:
    How can I prevent clients from directly accessing the files (to prevent
    malicious or accidental corruption of the data files)?

  • Jerry Stuckle

    #2
    Re: Data transfer problem - ideas/solutions wanted (please)

    E.T. Grey wrote:
    > Hi,
    >
    > I have an interesting problem. I have a (LARGE) set of historical data
    > that I want to keep on a central server, as several separate files. I
    > want a client process to be able to request the data in a specific file
    > by specifying the file name, start date/time and end date/time.
    >
    > The files are in binary format, to conserve space on the server (as well
    > as to reduce processing time). The data in each file can be quite
    > large, covering several years of data. New data will be appended to
    > these files each day, by a (PHP) script. The server machine is likely to
    > be a Unix machine, whereas my clients will be running on Windows
    > machines. My client program is written in C++.
    >
    > My three main problems/questions are as follows:
    >
    > 1). Transfer method issue:
    > What is the best (i.e. most efficient and fastest) way to transfer data
    > from the server to clients? I think SOAP is likely to be too slow,
    > because of the sheer size of the data.
    >
    > 2). Cross-platform issue:
    > How can I ensure that the (binary?) data sent from the Unix server
    > can be correctly interpreted at the client side?
    >
    > 3). Security issue:
    > How can I prevent clients from directly accessing the files (to prevent
    > malicious or accidental corruption of the data files)?
    >

    Since this is going to be on a Unix machine, might I suggest one of the
    Unix/Linux groups? Other than the fact this is going to be appended to
    by a PHP script (which isn't part of your question), I don't see
    anything indicating PHP in involved, much less a PHP question.

    --
    ==================
    Remove the "x" from my email address
    Jerry Stuckle
    JDS Computer Training Corp.
    jstucklex@attglobal.net
    ==================


    • E.T. Grey

      #3
      Re: Data transfer problem - ideas/solutions wanted (please)



      Jerry Stuckle wrote:
      > E.T. Grey wrote:
      >
      >> Hi,
      >>
      >> I have an interesting problem. I have a (LARGE) set of historical data
      >> that I want to keep on a central server, as several separate files. I
      >> want a client process to be able to request the data in a specific
      >> file by specifying the file name, start date/time and end date/time.
      >>
      >> The files are in binary format, to conserve space on the server (as
      >> well as to reduce processing time). The data in each file can be
      >> quite large, covering several years of data. New data will be appended
      >> to these files each day, by a (PHP) script. The server machine is
      >> likely to be a Unix machine, whereas my clients will be running on
      >> Windows machines. My client program is written in C++.
      >>
      >> My three main problems/questions are as follows:
      >>
      >> 1). Transfer method issue:
      >> What is the best (i.e. most efficient and fastest) way to transfer data
      >> from the server to clients? I think SOAP is likely to be too slow,
      >> because of the sheer size of the data.
      >>
      >> 2). Cross-platform issue:
      >> How can I ensure that the (binary?) data sent from the Unix
      >> server can be correctly interpreted at the client side?
      >>
      >> 3). Security issue:
      >> How can I prevent clients from directly accessing the files (to
      >> prevent malicious or accidental corruption of the data files)?
      >>
      >
      > Since this is going to be on a Unix machine, might I suggest one of the
      > Unix/Linux groups? Other than the fact this is going to be appended to
      > by a PHP script (which isn't part of your question), I don't see
      > anything indicating PHP is involved, much less a PHP question.
      >

      With the benefit of hindsight, I did not make myself clear. A further
      clarification is thus in order:

      I have implemented the server side of the solution using PHP. I am
      communicating with the C++ frontend using SOAP (i.e. communicating
      between PHP on the server and C++ at the client).


      My first question was asked because I assume SOAP is too heavy for file
      transfer (OK, not directly a PHP question).

      My second question was asked because the files will be created (and
      appended) using PHP scripts - and I was wondering whether binary files
      written by PHP on Unix can be read by a C++ application
      running on Windows.

      My third question was asked because I'm a relative WAMP/LAMP & PHP
      newbie and I do not fully understand security issues in this framework.
      I simply know that I want to prevent clients from directly accessing the
      files.

      Hope this helps clarify things


      • NC

        #4
        Re: Data transfer problem - ideas/solutions wanted (please)

        E.T. Grey wrote:
        >
        > I have a (LARGE) set of historical data that I want to keep
        > on a central server, as several separate files.

        How large exactly?

        > I want a client process to be able to request the data in a
        > specific file by specifying the file name, start date/time and
        > end date/time.

        The start/end date/time bit actually is a rather fat hint that you
        should consider using a database... Searching through large files will
        eat up enormous amounts of disk and processor time.

        > New data will be appended to these files each day, by a
        > (PHP) script.

        Yet another reason to consider a database...

        > What is the best (i.e. most efficient and fast way) to transfer data
        > from the server to clients?

        Assuming you are using HTTP, compressed (gzip) CSV will probably be the
        fastest.
        > How can I ensure that the (binary?) data sent from the Unix server
        > can be correctly interpreted at the client side?

        Why should the data be binary? Compressed CSV is likely to be at least
        as compact as binary data, plus CSV will be human-readable, which
        should help during debugging.

        > How can I prevent clients from directly accessing the files
        > (to prevent malicious or accidental corruption of the data files)?

        Import them into a database and lock the originals in a safe place.

        Cheers,
        NC


        • Jerry Stuckle

          #5
          Re: Data transfer problem - ideas/solutions wanted (please)

          E.T. Grey wrote:
          >
          >
          > Jerry Stuckle wrote:
          >
          >> E.T. Grey wrote:
          >>
          >>> Hi,
          >>>
          >>> I have an interesting problem. I have a (LARGE) set of historical
          >>> data that I want to keep on a central server, as several separate
          >>> files. I want a client process to be able to request the data in a
          >>> specific file by specifying the file name, start date/time and end
          >>> date/time.
          >>>
          >>> The files are in binary format, to conserve space on the server (as
          >>> well as to reduce processing time). The data in each file can be
          >>> quite large, covering several years of data. New data will be
          >>> appended to these files each day, by a (PHP) script. The server
          >>> machine is likely to be a Unix machine, whereas my clients will be
          >>> running on Windows machines. My client program is written in C++.
          >>>
          >>> My three main problems/questions are as follows:
          >>>
          >>> 1). Transfer method issue:
          >>> What is the best (i.e. most efficient and fastest) way to transfer data
          >>> from the server to clients? I think SOAP is likely to be too slow,
          >>> because of the sheer size of the data.
          >>>
          >>> 2). Cross-platform issue:
          >>> How can I ensure that the (binary?) data sent from the Unix
          >>> server can be correctly interpreted at the client side?
          >>>
          >>> 3). Security issue:
          >>> How can I prevent clients from directly accessing the files (to
          >>> prevent malicious or accidental corruption of the data files)?
          >>>
          >>
          >> Since this is going to be on a Unix machine, might I suggest one of
          >> the Unix/Linux groups? Other than the fact this is going to be
          >> appended to by a PHP script (which isn't part of your question), I
          >> don't see anything indicating PHP is involved, much less a PHP question.
          >>
          >
          > With the benefit of hindsight, I did not make myself clear. A further
          > clarification is thus in order:
          >
          > I have implemented the server side of the solution using PHP. I am
          > communicating with the C++ frontend using SOAP (i.e. communicating
          > between PHP on the server and C++ at the client).
          >
          >
          > My first question was asked because I assume SOAP is too heavy for file
          > transfer (OK, not directly a PHP question).
          >
          > My second question was asked because the files will be created (and
          > appended) using PHP scripts - and I was wondering whether binary files
          > written by PHP on Unix can be read by a C++ application
          > running on Windows.
          >
          > My third question was asked because I'm a relative WAMP/LAMP & PHP
          > newbie and I do not fully understand security issues in this framework.
          > I simply know that I want to prevent clients from directly accessing the
          > files.
          >
          > Hope this helps clarify things
          >

          Well, as for reading and writing the files - C++ should be able to read
          any file written by PHP, COBOL or any other language. You may be forced
          to do some massaging of the bytes, but that should be all.

          As for not accessing the files directly - just don't put them in your
          web root directory (or anywhere below it). Then someone can't access it
          through the website.

          --
          ==================
          Remove the "x" from my email address
          Jerry Stuckle
          JDS Computer Training Corp.
          jstucklex@attglobal.net
          ==================


          • E.T. Grey

            #6
            Re: Data transfer problem - ideas/solutions wanted (please)



            NC wrote:
            > E.T. Grey wrote:
            >
            >> I have a (LARGE) set of historical data that I want to keep
            >> on a central server, as several separate files.
            >
            >
            > How large exactly?

            At last count, there are about 65,000 distinct files (and increasing).

            >
            >> I want a client process to be able to request the data in a
            >> specific file by specifying the file name, start date/time and
            >> end date/time.
            >
            >
            > The start/end date/time bit actually is a rather fat hint that you
            > should consider using a database... Searching through large files will
            > eat up enormous amounts of disk and processor time.
            >

            Not necessarily true. Each file has the equivalent of approx 1M rows
            (yes - that's 1 million) - yet the binary files (which use compression
            algorithms) are approx 10-15 KB in size. If you multiply the number of rows
            (on average) by the number of files - you can quickly see why using a db as
            a repository would be a poor design choice.

            >> New data will be appended to these files each day, by a
            >> (PHP) script.
            >
            >
            > Yet another reason to consider a database...
            >

            See above.

            >> What is the best (i.e. most efficient and fast way) to transfer data
            >> from the server to clients?
            >
            >
            > Assuming you are using HTTP, compressed (gzip) CSV will probably be the
            > fastest.
            >

            This involves converting the read data to a string first, before
            (possibly) zipping it and sending it. This incurs overhead (that I
            would like to avoid) on both server and client.

            >> How can I ensure that the (binary?) data sent from the Unix server
            >> can be correctly interpreted at the client side?
            >
            >
            > Why should the data be binary? Compressed CSV is likely to be at least
            > as compact as binary data, plus CSV will be human-readable, which
            > should help during debugging.
            >

            See above.

            >> How can I prevent clients from directly accessing the files
            >> (to prevent malicious or accidental corruption of the data files)?
            >
            >
            > Import them into a database and lock the originals in a safe place.
            >
            > Cheers,
            > NC
            >


            • NC

              #7
              Re: Data transfer problem - ideas/solutions wanted (please)

              E.T. Grey wrote:
              >
              > > > I have a (LARGE) set of historical data that I want to keep
              > > > on a central server, as several separate files.
              > >
              > >
              > > How large exactly?
              >
              > At last count, there are about 65,000 distinct files (and increasing)
              ...
              > Each file has the equivalent of approx 1M rows (yes - that's 1 million)
              ...
              > If you multiply the number of rows (on avg) by the number of files -
              > you can quickly see why using a db as a repository would be a
              > poor design choice.

              Sorry, I can't. 65 million records is a manageable database.
              > This involves converting the read data to a string first, before
              > (possibly) zipping it and sending it. This incurs overhead (that I
              > would like to avoid) on both server and client.

              And yet you are willing to convert EVERY BIT of that data when you
              search through it...

              Cheers,
              NC


              • noone

                #8
                Re: Data transfer problem - ideas/solutions wanted (please)

                NC wrote:
                > E.T. Grey wrote:
                >
                >>>> I have a (LARGE) set of historical data that I want to keep
                >>>> on a central server, as several separate files.
                >>>
                >>>
                >>> How large exactly?
                >>
                >> At last count, there are about 65,000 distinct files (and increasing)
                >
                > ...
                >
                >> Each file has the equivalent of approx 1M rows (yes - that's 1 million)
                >
                > ...
                >
                >> If you multiply the number of rows (on avg) by the number of files -
                >> you can quickly see why using a db as a repository would be a
                >> poor design choice.
                >
                >
                > Sorry, I can't. 65 million records is a manageable database.

                I agree... I have designed and deployed binary and ASCII data loads in
                excess of 250 million records/day. Searching the data was a piece of
                cake - if you know how to design the database correctly.

                65M records is peanuts to a database - even MySQL. With proper indexing
                you can do a direct-row lookup in 4-8 I/Os - not so with the path
                you are currently trying to traverse... you are looking at up to 65M
                reads - and reads are very expensive!

                Use the proper tools/mechanisms for the job at hand...


                Michael Austin
                DBA
                (stuff snipped)


                • E.T. Grey

                  #9
                  Re: Data transfer problem - ideas/solutions wanted (please)



                  NC wrote:
                  > E.T. Grey wrote:
                  >
                  >>>> I have a (LARGE) set of historical data that I want to keep
                  >>>> on a central server, as several separate files.
                  >>>
                  >>>
                  >>> How large exactly?
                  >>
                  >> At last count, there are about 65,000 distinct files (and increasing)
                  >
                  > ...
                  >
                  >> Each file has the equivalent of approx 1M rows (yes - that's 1 million)
                  >
                  > ...
                  >
                  >> If you multiply the number of rows (on avg) by the number of files -
                  >> you can quickly see why using a db as a repository would be a
                  >> poor design choice.
                  >
                  >
                  > Sorry, I can't. 65 million records is a manageable database.
                  >
                  >

                  It's amazing how some people, once having set their minds on one thing,
                  won't change them - even when presented with the facts. Last time I
                  checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well - you
                  can't win them all.


                  • David Haynes

                    #10
                    Re: Data transfer problem - ideas/solutions wanted (please)

                    E.T. Grey wrote:
                    >
                    >
                    > NC wrote:
                    >> E.T. Grey wrote:
                    >>
                    >>>>> I have a (LARGE) set of historical data that I want to keep
                    >>>>> on a central server, as several separate files.
                    >>>>
                    >>>>
                    >>>> How large exactly?
                    >>>
                    >>> At last count, there are about 65,000 distinct files (and increasing)
                    >>
                    >> ...
                    >>
                    >>> Each file has the equivalent of approx 1M rows (yes - that's 1 million)
                    >>
                    >> ...
                    >>
                    >>> If you multiply the number of rows (on avg) by the number of files -
                    >>> you can quickly see why using a db as a repository would be a
                    >>> poor design choice.
                    >>
                    >>
                    >> Sorry, I can't. 65 million records is a manageable database.
                    >>
                    >>
                    >
                    > It's amazing how some people, once having set their minds on one thing,
                    > won't change them - even when presented with the facts. Last time I
                    > checked, 65,000 x 1 million = 65 billion - not 65 million. Ah well - you
                    > can't win them all.
                    >
                    Well, my question is "Do all 65 billion records need to be active at all
                    times?" If not, a roll-up/archival strategy may reduce this to
                    a usable size.

                    -david-


                    • E.T. Grey

                      #11
                      Re: Data transfer problem - ideas/solutions wanted (please)



                      noone wrote:
                      > NC wrote:
                      >
                      >> E.T. Grey wrote:
                      >>
                      >>>>> I have a (LARGE) set of historical data that I want to keep
                      >>>>> on a central server, as several separate files.
                      >>>>
                      >>>>
                      >>>> How large exactly?
                      >>>
                      >>>
                      >>> At last count, there are about 65,000 distinct files (and increasing)
                      >>
                      >>
                      >> ...
                      >>
                      >>> Each file has the equivalent of approx 1M rows (yes - that's 1 million)
                      >>
                      >>
                      >> ...
                      >>
                      >>> If you multiply the number of rows (on avg) by the number of files -
                      >>> you can quickly see why using a db as a repository would be a
                      >>> poor design choice.
                      >>
                      >>
                      >> Sorry, I can't. 65 million records is a manageable database.
                      >
                      >
                      > I agree... I have designed and deployed binary and ASCII data loads in
                      > excess of 250 million records/day. Searching the data was a piece of
                      > cake - if you know how to design the database correctly.
                      >
                      > 65M records is peanuts to a database - even MySQL. With proper indexing
                      > you can do a direct-row lookup in 4-8 I/Os - not so with the path
                      > you are currently trying to traverse... you are looking at up to 65M
                      > reads - and reads are very expensive!
                      >
                      > Use the proper tools/mechanisms for the job at hand...
                      >
                      > Michael Austin
                      > DBA
                      > (stuff snipped)
                      >

                      Please do not patronise me. Like NC, you completely overlooked the
                      obvious fact that the number of records we are talking about (if a
                      database design is used) runs into billions - not millions. Furthermore,
                      the datasets are time-series data, and therefore order is of paramount
                      importance. Instead of trying to impose a design on me (without fully
                      understanding the problem), it would have been infinitely preferable if
                      you had simply answered the question I asked in the first place. But
                      judging by the way you have overlooked basic facts - whilst being
                      hell-bent that a db solution is *definitely* the way forward - you have
                      instantly lost any credibility you may have had - and consequently, I
                      will ignore any "advice" you care to offer in the future.


                      • NC

                        #12
                        Re: Data transfer problem - ideas/solutions wanted (please)

                        E.T. Grey wrote:
                        > NC wrote:
                        > > E.T. Grey wrote:
                        > >
                        > > > At last count, there are about 65,000 distinct files (and increasing)
                        > > ...
                        > > > Each file has the equivalent of approx 1M rows (yes - that's 1 million)
                        > > ...
                        > > > If you multiply the number of rows (on avg) by the number of files -
                        > > > you can quickly see why using a db as a repository would be a
                        > > > poor design choice.
                        > >
                        > > Sorry, I can't. 65 million records is a manageable database.
                        >
                        > It's amazing how some people, once having set their minds on one thing,
                        > won't change them - even when presented with the facts. Last time I
                        > checked, 65,000 x 1 million = 65 billion - not 65 million.

                        OK, I obviously made a stupid typo; I'll gladly correct it:

                        65 billion records is a manageable database.

                        Even MySQL (which is often thought of as a departmental rather than
                        enterprise system, although with MySQL 5.0 available this may be
                        reconsidered) is capable of maintaining large databases. Since MySQL
                        3.23, you can store up to 65,536 terabytes using the MyISAM storage
                        engine (which effectively means that the size of your table is limited
                        only by your operating system's file size limit) or a mere 64 TB using
                        the InnoDB storage engine (but in this case, the file size limit does
                        not apply, because an InnoDB table can be spread over several files).
                        You stated earlier that a compressed set of one million records takes
                        10-15 kilobytes to store, so an uncompressed record would probably be
                        just a few bytes long. This is a load that a single server with a
                        properly configured RAID could handle...

                        Cheers,
                        NC

