fwrite() efficiency/alternative

  • Ectara
    New Member
    • Nov 2009
    • 24

    fwrite() efficiency/alternative

    Hello, all.

    I became aware of issues with fwrite() a couple of years ago, when it threw monkey wrenches into my data management, and I quickly wrote a function to work around it and replace it. But recently I have decided that my current method is a serious bottleneck, and I was wondering whether using fwrite() to write an array of single bytes would be fatal to portability. I've revised and optimized my current integer-writing function, seen below, several times and squeezed as much performance out of it as I can. The datatypes are self-explanatory and defined elsewhere: (signedness)(type)(bitcount). So ui32 is an unsigned 32-bit integer, etc.

    Code:
    ui32 Ectaraio::write( const void * ptr, ui32 size, ui32 n){
    	ui32 numWritten=0;
    	const si8 * p = (const si8*)ptr;	/* read-only view of the caller's data */
    	if(size == 1){
    		/* single-byte elements: byte order doesn't matter */
    		for(ui32 i = n; i--;){
    			putc(*(p++),fp);
    			numWritten++;
    		}
    	}else{
    		if(!endVar){
    			/* little-endian host: emit each size-byte element in reverse
    			   so the file is always big endian */
    			for(ui32 varIndex=n;varIndex--;){
    				p+=size;
    				for(ui32 byteIndex=size;byteIndex--;){
    					putc(*(--p),fp);
    					numWritten++;
    					if(!verbose)continue;
    					sf32 percent = (sf32)numWritten/(size*n)*100;
    					if(((si32)percent%5==0))printf("%.2f%%\n",percent);
    				}
    				p+=size;
    			}
    		}else{
    			/* big-endian host: bytes are already in file order */
    			for(ui32 varIndex=n;varIndex--;){
    				for(ui32 byteIndex=size;byteIndex--;){
    					putc(*(p++),fp);
    					numWritten++;
    					if(!verbose)continue;
    					sf32 percent = (sf32)numWritten/(size*n)*100;
    					if(((si32)percent%5==0))printf("%.2f%%\n",percent);
    				}
    			}
    		}
    	}
    	return numWritten;
    }
    Currently, my issue is that n*size putc() calls are a lot slower than a single fwrite() of size bytes at a time, at least until a buffer fills and gets written out. What concerns me is breaking portability. I noticed years back that so much as recompiling the program somehow made my files unusable, even though I never write structures as a whole or anything of the sort. By the way, endVar is a variable containing the endianness state: 0 for LSB and 1 for MSB, determined through a quick routine in the constructor of my IO class, part of my library. I digress. The main question: is using fwrite() to write blocks of single bytes unhealthy if the same file is to be used on many platforms? Or is my current function the best I am going to get for my purposes?
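
    For reference, the typedefs and the constructor's endianness probe aren't shown above; a minimal sketch of what they might look like (the names ui32/si8/sf32/endVar come from the post, but these typedefs and the probe itself are assumptions) is:

    Code:
    #include <cstdint>

    /* Assumed typedefs following the (signedness)(type)(bitcount) scheme. */
    typedef uint32_t ui32;
    typedef int32_t  si32;
    typedef int8_t   si8;
    typedef float    sf32;

    /* Hypothetical runtime probe for endVar: returns 0 if the host stores the
       least significant byte first (LSB), 1 if it stores the most significant
       byte first (MSB). */
    static int detectEndianness(void){
    	ui32 probe = 1;
    	const unsigned char * bytes = (const unsigned char *)&probe;
    	return (bytes[0] == 1) ? 0 : 1;
    }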

    - Ectara
  • newb16
    Contributor
    • Jul 2008
    • 687

    #2
    fwrite() should be fine as long as you pack and unpack your structures into a byte array properly (taking care of endianness).
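
    For instance, a minimal sketch of what packing "properly" can mean (a hypothetical writeU32BigEndian helper that always stores a fixed big-endian byte order, regardless of the host):

    Code:
    #include <cstdint>
    #include <cstdio>

    /* Store the value in a fixed big-endian byte order, independent of the
       host's endianness, then write the bytes with one fwrite() call. */
    static size_t writeU32BigEndian(uint32_t value, FILE *fp){
    	uint8_t bytes[4];
    	bytes[0] = (uint8_t)(value >> 24);
    	bytes[1] = (uint8_t)(value >> 16);
    	bytes[2] = (uint8_t)(value >> 8);
    	bytes[3] = (uint8_t)(value);
    	return fwrite(bytes, 1, sizeof bytes, fp);	/* bytes written */
    }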


    • donbock
      Recognized Expert Top Contributor
      • Mar 2008
      • 2427

      #3
      The strategy taken in your code snippet is to write to the file a byte at a time; using the specified endianness to control the order you pluck bytes from the input array.

      Another strategy would be to construct a copy of the input array, transforming the copy in accordance with the specified endianness. Then you can write the copy to the file in one chunk.
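
      A minimal sketch of that copy-and-transform strategy (a hypothetical writeSwappedCopy helper; the name and error handling are illustrative):

      Code:
      #include <cstdio>
      #include <cstdlib>

      /* Reverse the bytes of each size-byte element into a temporary copy,
         then write the whole copy to the file with a single fwrite() call. */
      static size_t writeSwappedCopy(const void *ptr, size_t size, size_t n, FILE *fp){
      	unsigned char * copy = (unsigned char *)malloc(size * n);
      	if(copy == NULL) return 0;
      	const unsigned char * src = (const unsigned char *)ptr;
      	for(size_t i = 0; i < n; i++)
      		for(size_t b = 0; b < size; b++)
      			copy[i * size + b] = src[i * size + (size - 1 - b)];
      	size_t written = fwrite(copy, size, n, fp);	/* elements written */
      	free(copy);
      	return written;
      }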

      Personally, I've found it less confusing to use text files to communicate information between systems of potentially different endianness. The disadvantages are obvious: the time taken to translate between binary and text, and the increased size of the file. The advantage is also obvious: processor and compiler changes don't render the data file unusable.
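
      For example, a number written as text with fprintf() round-trips through a portable decimal representation, so endianness and integer layout stop mattering:

      Code:
      #include <cstdio>

      /* Write one number per line as text... */
      static void writeValueAsText(FILE *fp, long value){
      	fprintf(fp, "%ld\n", value);
      }

      /* ...and read it back; returns 1 on success, 0 on failure. */
      static int readValueAsText(FILE *fp, long *value){
      	return fscanf(fp, "%ld", value) == 1;
      }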

      There are more things than endianness to complicate your life when communicating between systems: two's-complement representation of integers is ubiquitous but not guaranteed to be universal; representation of floating point numbers is notoriously fickle; alignment rules can vary, changing the number of pad bytes between structure fields; order of bits in a bit field can vary; etc.


      • Ectara
        New Member
        • Nov 2009
        • 24

        #4
        Also, I neglected to mention that the entire library is designed to operate in big endian; as the function above shows, when writing on a little-endian host it writes each set of size bytes in reverse, while on a big-endian host it writes the bytes in order. I suppose I could do something like using a small buffer of size length to swap the bytes around and issue one fwrite() call per set. I was just curious if fwrite() really is as fickle as it was in my experience, where a mere change in something cosmetic will magically make the file unreadable after writing it. I guess I'll just keep testing and checking.


        • donbock
          Recognized Expert Top Contributor
          • Mar 2008
          • 2427

          #5
          Originally posted by Ectara
          I was just curious if fwrite() really is as fickle as it was in my experience, where a mere change in something cosmetic will magically make the file unreadable after writing it.
          I'm not aware of any fickleness in the fwrite() function. What sort of cosmetic changes have caused these problems for you?


          • Tassos Souris
            New Member
            • Aug 2008
            • 152

            #6
            I totally agree with donbock. Go for the text side. Besides, many widely used standards (like XML) rely on text for communication between applications.

            Besides, your implementation of writing a number to a file in its binary format is most probably not as efficient as the services that the OS might provide.
            For example, writing each byte to a file individually is not very efficient (remember that the file might be set to unbuffered mode by the client). Also, all those ifs and fors... uh... branch misses? Those really hurt performance.
            If you desperately need to write the number to the file in its binary format, do the following (a sketch follows this list):
            1) Convert the number into an array of bytes with parallelism, using bitwise operators. Do not use conditions.
            2) Write the array with fwrite(). When you fread() that chunk of bytes from the file you will have the original array of bytes you produced.
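
            A minimal sketch of the read side of this approach (a hypothetical readU32BigEndian helper; the write side would mirror it with shifts in the opposite direction, as in the packing example earlier in the thread):

            Code:
            #include <cstdint>
            #include <cstdio>

            /* fread() the raw bytes and rebuild the value with shifts, so no
               endianness conditions are needed; returns 1 on success. */
            static int readU32BigEndian(uint32_t *out, FILE *fp){
            	uint8_t bytes[4];
            	if(fread(bytes, 1, sizeof bytes, fp) != sizeof bytes) return 0;
            	*out = ((uint32_t)bytes[0] << 24) |
            	       ((uint32_t)bytes[1] << 16) |
            	       ((uint32_t)bytes[2] << 8)  |
            	        (uint32_t)bytes[3];
            	return 1;
            }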


            • Ectara
              New Member
              • Nov 2009
              • 24

              #7
              Originally posted by donbock
              I'm not aware of any fickleness in the fwrite() function. What sort of cosmetic changes have caused these problems for you?
              Something silly like recompiling the executable to fix a spelling error in a string literal, unrelated to the function or the file I/O, such as a welcome message that is printed directly without manipulation. All of a sudden, reading in what was once valid data gives partially correct data, but some stack-allocated variables come up empty or with bizarre numbers. It boggles my mind how that is possible when I'm looking at a valid hexdump and how the data is read hasn't changed. fwrite() seems to be working well so far in these current trials, though.

              Originally posted by Tassos Souris
              <snip>
              Besides, your implementation of writing a number to a file in its binary format is most probably not as efficient as the services that the OS might provide.
              For example, writing each byte to a file individually is not very efficient (remember that the file might be set to unbuffered mode by the client). Also, all those ifs and fors... uh... branch misses? Those really hurt performance.
              If you desperately need to write the number to the file in its binary format, do the following:
              1) Convert the number into an array of bytes with parallelism, using bitwise operators. Do not use conditions.
              2) Write the array with fwrite(). When you fread() that chunk of bytes from the file you will have the original array of bytes you produced.
              I had done quite a bit of research on file buffering, and figured that since I was making several calls anyway, whether it wrote as soon as possible or when the buffer filled would make little difference compared to the performance hit caused by the function-call overhead. Also, what branch misses? Would the if/else structure not catch all possibilities for those two variables? My brother did suggest using preprocessor directives instead of an if/else, but I prefer the if/else for the sole purpose of spoofing my endianness at will, to write or read in a different endianness. It has its uses, despite the minor performance hit for each call to the function. Also, what would you suggest to replace the for loops? I have tried some new things (trusting fwrite() for now, and seeing how it holds up):

              Code:
              ui32 Ectaraio::write( const void * ptr, ui32 size, ui32 n){
              	ui32 numWritten=0;	/* counts fwrite() elements, not bytes */
              	const si8 * p = (const si8*)ptr;
              	if(size == 1)numWritten = fwrite(p,sizeof(si8),n,fp);
              	else{
              		si8 buffer[size];	/* variable-length array; a compiler extension in C++ */
              		if(!endVar){
              			/* little-endian host: reverse each element into the buffer,
              			   then issue one fwrite() per element */
              			for(ui32 varIndex=n;varIndex--;){
              				for(ui32 byteIndex=size;byteIndex--;)buffer[byteIndex] = *(p++);
              				numWritten+=fwrite(buffer,size,1,fp);
              			}
              		}else numWritten+=fwrite(p,size,n,fp);	/* big-endian host: one call for everything */
              	}
              	return numWritten;
              }
              Also, thank you for your input, everyone. The above function shaved 5 seconds off writing out an 8.9 MB dynamically allocated array of single bytes.


              • donbock
                Recognized Expert Top Contributor
                • Mar 2008
                • 2427

                #8
                Originally posted by Ectara
                Something silly like recompiling the executable to fix a spelling error in a string literal, unrelated to the function or the file I/O, such as a welcome message that is printed directly without manipulation. All of a sudden, reading in what was once valid data gives partially correct data, but some stack-allocated variables come up empty or with bizarre numbers. It boggles my mind how that is possible when I'm looking at a valid hexdump and how the data is read hasn't changed. fwrite() seems to be working well so far in these current trials, though.
                I might be wrong, but I'm fairly sure these problems were not caused by fwrite() itself. It is much more likely that there was a change in some implementation-defined attribute of the C language, thereby causing an unexpected change in how the byte array was encoded or decoded.


                • Ectara
                  New Member
                  • Nov 2009
                  • 24

                  #9
                  Hm. Well, it is working for now, and many other people use it with few complaints, so I guess I can make replacements for my old byte-by-byte I/O. Is there a more efficient way to buffer and swap the order of the bytes than what I did?
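
                   One possible answer, sketched under assumptions (the helper name and the 4096-byte batch size are illustrative): swap elements into a larger staging buffer and flush it with one fwrite() per batch, instead of one fwrite() per element.

                   Code:
                   #include <cstdio>
                   #include <cstddef>

                   static size_t writeSwappedBatched(const void *ptr, size_t size, size_t n, FILE *fp){
                   	enum { BATCH_BYTES = 4096 };
                   	unsigned char staging[BATCH_BYTES];
                   	const unsigned char * src = (const unsigned char *)ptr;
                   	if(size == 0 || size > BATCH_BYTES) return 0;	/* element must fit the buffer */
                   	size_t perBatch = BATCH_BYTES / size;	/* whole elements per batch */
                   	size_t written = 0;
                   	for(size_t done = 0; done < n; ){
                   		size_t count = (n - done < perBatch) ? (n - done) : perBatch;
                   		/* reverse each element's bytes into the staging buffer */
                   		for(size_t i = 0; i < count; i++)
                   			for(size_t b = 0; b < size; b++)
                   				staging[i * size + b] = src[(done + i) * size + (size - 1 - b)];
                   		written += fwrite(staging, size, count, fp);	/* elements written */
                   		done += count;
                   	}
                   	return written;
                   }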


                  • Banfa
                    Recognized Expert Expert
                    • Feb 2006
                    • 9067

                    #10
                    Originally posted by Ectara
                    Something silly like recompiling the executable to fix a spelling error in a string literal, unrelated to the function or the file I/O, such as a welcome message that is printed directly without manipulation. All of a sudden, reading in what was once valid data gives partially correct data, but some stack-allocated variables come up empty or with bizarre numbers. It boggles my mind how that is possible when I'm looking at a valid hexdump and how the data is read hasn't changed. fwrite() seems to be working well so far in these current trials, though.
                     Sounds like undefined behaviour to me. I think I would be reaching for a copy of BoundsChecker right about now.

