Using Python/CGI to stream large tar files on the fly??

  • falloutphil
    New Member
    • May 2007
    • 4

    Using Python/CGI to stream large tar files on the fly??

    Hi,

    I'm running a CGI script written in Python that tars up the photos in user-selected directories (chosen via a form). It's running on Apache 1.3.31.1 on Solaris 5.8.

    It works well for a small number of files, but when I try it on large archives (typically over 10MB, though this seems to vary) the download stops short. Occasionally I get an error message, usually a broken pipe (IOError 32), but sometimes I see nothing at all (that may just be a flushing issue with stderr on my webspace, or perhaps the broken pipe is a red herring).

    My understanding is that a broken pipe means either the client closed its download prematurely (definitely not the case here), or Apache/Python closed stdout on its side. I get no timeout error in the browser - it just stops dead, thinking the download is complete - and the tar is always corrupted and well short of the size I'd expect.

    I would have thought the stdout pipe to the user would stay open as long as data was being streamed to it? Is it possible that my webspace provider has set some time limit? Could some sort of buffer under- or over-run be occurring on the stdout stream?

    I've tried various other ways around the problem. I've ruled out zips, because I can't build the archive in memory (using cStringIO or similar) and then stream it as one complete string - the runtime memory limit on my webspace (only 30MB as far as I can see!) won't allow it. The only other option I can think of is writing a temporary file to disk and letting the user download that - again not practical, as disk space is limited, and it's a bit ugly too.
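
    For illustration, the in-memory zip route I ruled out would look roughly like this. It's only an untested sketch (downloadZip is a made-up name) - the point is that the whole archive has to sit in RAM as one string before anything is sent, which is what kills it under a ~30MB limit:

    import sys, os, glob, zipfile, cStringIO

    def downloadZip( folders ):

        buf = cStringIO.StringIO()   # entire archive is built in memory first
        archive = zipfile.ZipFile( buf, "w", zipfile.ZIP_DEFLATED )

        for folder in folders:
            photoDir = os.path.join( folder, "photos" )
            for photo in glob.glob( photoDir + "/*.jpg" ) + glob.glob( photoDir + "/*.JPG" ):
                archive.write( photo, "/".join( photo.split( "/" )[-3:] ) )

        archive.close()
        data = buf.getvalue()        # one complete string - this is the memory problem

        print "Content-type: application/zip"
        print "Content-Length: %d" % len( data )
        print "Content-Disposition: attachment; filename=\"Download.zip\"\n"
        sys.stdout.write( data )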

    The function causing the problem is below; it's pretty self-explanatory. If anyone has any ideas what might be causing this I'd be very grateful - it's driving me round the bend!

    Thanks,

    Phil.


    import sys, os, glob, string, signal, tarfile

    def download( folders ):

        print "Content-type: application/x-tar"
        print "Content-Disposition: attachment; filename=\"Download.tar\"\n"

        # Write the tar archive straight to stdout rather than building it
        # in memory or on disk first.
        parentZipFile = tarfile.open( '', "w", sys.stdout )

        #signal.signal( signal.SIGPIPE, signal.SIG_DFL )

        for folder in folders:

            photoDir = os.path.join( folder, "photos" )
            if os.path.isdir( photoDir ):

                # We have photos!
                photos = glob.glob( photoDir + "/*.jpg" )
                photos += glob.glob( photoDir + "/*.JPG" )
                for photo in photos:

                    # Archive each photo under its last three path components.
                    parentZipFile.add( photo, string.join( photo.split( "/" )[-3:], "/" ) )

        parentZipFile.close()
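
    One thing I haven't tried yet is tarfile's pipe/streaming mode, "w|", which never seeks on the output stream, together with flushing stdout after each file so the bytes keep moving towards Apache. Below is only an untested sketch of that idea (downloadStreaming is just a made-up name) - I don't know that it fixes anything, but it would at least rule out buffering on my side:

    import sys, os, glob, tarfile

    def downloadStreaming( folders ):

        print "Content-type: application/x-tar"
        print "Content-Disposition: attachment; filename=\"Download.tar\"\n"
        sys.stdout.flush()

        # "w|" writes the archive as a pure stream - no seeking on stdout.
        tar = tarfile.open( mode="w|", fileobj=sys.stdout )

        for folder in folders:
            photoDir = os.path.join( folder, "photos" )
            if os.path.isdir( photoDir ):
                for photo in glob.glob( photoDir + "/*.jpg" ) + glob.glob( photoDir + "/*.JPG" ):
                    tar.add( photo, "/".join( photo.split( "/" )[-3:] ) )
                    sys.stdout.flush()   # push each member out immediately

        tar.close()
        sys.stdout.flush()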
  • falloutphil
    New Member
    • May 2007
    • 4

    #2
    To clarify, this is definitely not a *byte* limit on streaming.

    From my nice fast connection at work I have no problem downloading a 54MB tar, for example, but if I crank it up to a larger 150-odd MB tar it falls over in exactly the same way.

    With the larger archives I can see, when I try to extract them, that the start of the archive unpacks fine - but then the end of the file is hit unexpectedly.

    This does look like a timeout on the Apache server.

    I was wondering if anyone could clarify, and perhaps suggest a workaround?
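
    For reference, if it is a server-side limit, the Apache 1.3 directive I'd ask the provider to check is Timeout (it defaults to 300 seconds) - though as I understand it, that should only fire when no data moves at all for that long, so I may be barking up the wrong tree:

    # httpd.conf - raise the connection timeout from the 300 second default
    Timeout 1200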

    Thanks again,

    Phil.


    • bartonc
      Recognized Expert Expert
      • Sep 2006
      • 6478

      #3
      Can't help with CGI scripts. Can help with posting code:
      We use [code] tags that will maintain the indentation of your code.
      Great work-around on your part using nested [indent] tags, though.
      It's all right there on the right-hand side of the page when posting or replying: 4 little things to keep in mind in * GUIDELINES...
