ofstream limitations?

  • mikejfe
    New Member
    • Feb 2007
    • 12

    ofstream limitations?

    I wrote a program to read a file, store that information in a 3D array, sort the array, and then write the result to a file. It should be straightforward. The problem I am running into is in the writing part.

    For now, I've taken the sorting function out.

    For debugging purposes, when the file is written, a counter tells me that all pieces of information have been written; however, on visual inspection the data is jumbled and quite a few pieces are missing. I have also tried displaying the data that is supposedly being written to the screen, and I can't see anything funky going on. This is partly because I am trying to work with files on the order of 512-1000 MB.

    If I load a small file, ~10 MB, there are no problems. I only have problems with larger files.

    So, here's my question (sorry for my long-windedness): are there limitations to the size of file that can be written using ofstream?

    I can post my code if that would be helpful. I know this should be so simple. I am pretty inexperienced with C++.
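
    In the meantime, here is roughly the shape of the program (a simplified sketch, not my actual code; the file names and dimensions are placeholders, and it assumes the data is binary floats):

        #include <fstream>
        #include <iostream>
        #include <vector>

        int main()
        {
            // Read the whole file of binary floats into one flat buffer
            // (my real array is 3D, but flat keeps the sketch short).
            std::ifstream in("input.asc", std::ios::binary);
            std::vector<float> data(65536UL * 256UL);
            in.read(reinterpret_cast<char*>(&data[0]),
                    data.size() * sizeof(float));

            // ... sorting step removed for debugging ...

            // Write the buffer back out and check that nothing failed.
            std::ofstream out("output.asc", std::ios::binary);
            out.write(reinterpret_cast<const char*>(&data[0]),
                      data.size() * sizeof(float));
            if (!out)
                std::cerr << "write failed\n";

            return 0;
        }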

    Thanks in advance!
  • Banfa
    Recognized Expert Expert
    • Feb 2006
    • 9067

    #2
    Originally posted by mikejfe
    So, here's my question (sorry for my long-windedness): are there limitations to the size of file that can be written using ofstream?
    It should only be limited by the underlying file system. You haven't said what you are using, but the most common file systems today (NTFS and whatever Linux uses) should be OK with files of that size.
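
    It is also worth checking the stream state after writing, since a failed write is otherwise silent. A minimal sketch (the function name is made up):

        #include <fstream>

        // Sketch: write a buffer and report whether every write succeeded.
        bool write_all(const char* path, const char* data, std::streamsize n)
        {
            std::ofstream out(path, std::ios::binary);
            out.write(data, n);
            out.close();        // flush before inspecting the state
            return !out.fail(); // false if any write (or the close) failed
        }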

    • RedSon
      Recognized Expert Expert
      • Jan 2007
      • 4980

      #3
      FYI

      NTFS:
      File size limit = 16 TiB with the current implementation

      ext3 (a common Linux filesystem):
      File size limit = 16 GiB - 2 TiB (depending on block size)

      And for those who do not know what a TiB is:

      A tebibyte (a contraction of tera binary byte) is a unit of information or computer storage, abbreviated TiB.

      1 tebibyte = 2^40 bytes = 1,099,511,627,776 bytes = 1,024 gibibytes
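
      If you want to check that arithmetic yourself, one line of C++ does it:

          #include <iostream>

          int main()
          {
              // 2^40 bytes in one tebibyte
              std::cout << (1ULL << 40) << '\n'; // prints 1099511627776
          }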

      • RedSon
        Recognized Expert Expert
        • Jan 2007
        • 4980

        #4
        And for anyone who is interested:

        • mikejfe
          New Member
          • Feb 2007
          • 12

          #5
          Thanks for your replies.

          I apologize for not putting this in my original post. I'm using XP and Dev-C++, and the drive is set up as NTFS. I know this is such a beginner's question... ugh.

          Please don't laugh at me, but I figured out that when I analyzed the data, my analysis was incorrect. My files are being saved "correctly."

          However, I do have a related side question: are there any theories as to why, when I save the file, the file size drops from 128 MB to 32 MB? The data is the same (e.g. 65,536 x 256 pieces of information) and both files stay .asc. It was this drastic change in file size that made me think I had an error in the first place. :S

          • Banfa
            Recognized Expert Expert
            • Feb 2006
            • 9067

            #6
            Originally posted by mikejfe
            However, I do have a related side question: are there any theories as to why, when I save the file, the file size drops from 128 MB to 32 MB? The data is the same (e.g. 65,536 x 256 pieces of information) and both files stay .asc. It was this drastic change in file size that made me think I had an error in the first place. :S
            1. You are not saving all the original data

            or

            2. The original file contained a lot of redundant data that is not required

            65,536 x 256 = 16M entries, implying your pieces of information are each 2 bytes long to make a 32 MB file. Is this correct?
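
            Spelling that arithmetic out (a throwaway check, not production code):

                #include <iostream>

                int main()
                {
                    const unsigned long long entries = 65536ULL * 256ULL; // 16,777,216
                    std::cout << entries * 2 / (1 << 20) << " MB at 2 bytes each\n"  // 32 MB
                              << entries * 4 / (1 << 20) << " MB at 4 bytes each\n"  // 64 MB
                              << entries * 8 / (1 << 20) << " MB at 8 bytes each\n"; // 128 MB
                }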

            • RedSon
              Recognized Expert Expert
              • Jan 2007
              • 4980

              #7
              Also, some NTFS volumes are set up to optimize disk space, so they compress the data transparently.
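
              On Windows you can check for that programmatically; a minimal sketch (the file name is hypothetical):

                  #include <windows.h>
                  #include <iostream>

                  int main()
                  {
                      // Check whether NTFS is transparently compressing the file.
                      DWORD attrs = GetFileAttributesA("output.asc");
                      if (attrs != INVALID_FILE_ATTRIBUTES &&
                          (attrs & FILE_ATTRIBUTE_COMPRESSED))
                          std::cout << "file is NTFS-compressed\n";
                  }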

              • mikejfe
                New Member
                • Feb 2007
                • 12

                #8
                Banfa, you must be correct about the redundant data. Apparently, each piece of information in the original file is 8 bytes long. What kind of redundant data could be in the file that would change a piece of information from 8 bytes to 2 bytes? In my array I store all the information as floats before writing to the file.

                Thanks for your responses. This is definitely a "learning thread" for me.

                • Banfa
                  Recognized Expert Expert
                  • Feb 2006
                  • 9067

                  #9
                  Originally posted by mikejfe
                  Banfa, you must be correct about the redundant data. Apparently, each piece of information in the original file is 8 bytes long. What kind of redundant data could be in the file that would change a piece of information from 8 bytes to 2 bytes? In my array I store all the information as floats before writing to the file.

                  Thanks for your responses. This is definitely a "learning thread" for me.
                  OK, well:

                  Is the original file text while the one you are writing is binary?

                  Is the original data stored as binary doubles while you are saving floats? (But see the next point.)

                  If your data items are floats, and therefore 4 bytes long, then your file is too short; it should be 64 MB = 65,536 * 256 * 4.

                  You probably have the information to answer this question (whereas we don't), because you know what you are reading and what you are writing.
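
                  A quick way to see the text-versus-binary difference (a sketch; the value is arbitrary):

                      #include <fstream>

                      int main()
                      {
                          float value = 3.14159f;

                          // Binary: exactly sizeof(float) = 4 bytes per value.
                          std::ofstream bin("sample.bin", std::ios::binary);
                          bin.write(reinterpret_cast<const char*>(&value), sizeof value);

                          // Text: one character per digit, so the size varies per value.
                          std::ofstream txt("sample.txt");
                          txt << value << '\n'; // "3.14159" plus a newline, not 4 bytes
                      }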
