Trimming a large log file?

This topic is closed.
  • veg_all@yahoo.com

    Trimming a large log file?

    I am looking for a simple way to keep my log files from growing too
    large. Basically I want something that truncates off the first 25 KB
    of a 100 KB log file. I could do it by reading the file twice: first to
    determine the number of lines, then a second time to write a copy of the
    reduced log file. Any simpler workarounds?

  • Steve

    #2
    Re: Trimming a large log file?

    On Tue, 28 Nov 2006 20:11:54 -0800, veg_all wrote:
    I am looking for a simple way to keep my log files from growing too
    large. Basically I want something that truncates off the first 25 KB
    of a 100 KB log file. I could do it by reading the file twice: first to
    determine the number of lines, then a second time to write a copy of the
    reduced log file. Any simpler workarounds?
    If you're using a *nix OS, then logrotate's probably your best bet.
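    For instance, a logrotate rule for this kind of log might look roughly
    like the sketch below (the path and numbers are placeholders, not
    anything from the original post):

    /var/log/my-app.log {
        size 100k
        rotate 3
        compress
        missingok
        notifempty
    }

    That keeps the log around 100 KB and holds three compressed generations
    before older copies are discarded.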



    • petersprc

      #3
      Re: Trimming a large log file?

      Another poster mentioned logrotate, which is a very easy way to do it.
      You can also do it with something like log4php
      (http://logging.apache.org/log4php/), if you're willing to use an
      external lib. Here's an example configuration:

      - Create log4php.properties in your app's dir. Set the file attribute
      to your log file path.

      log4php.debug=false
      log4php.rootLogger=DEBUG, LOG
      log4php.appender.LOG=LoggerAppenderRollingFile
      log4php.appender.LOG.file=/tmp/my-app
      log4php.appender.LOG.layout=LoggerLayoutTTCC
      log4php.appender.LOG.maxFileSize=10KB
      log4php.appender.LOG.maxBackupIndex=3

      - Initialize log4php in an include file or at the top of your script:

      define('LOG4PHP_CONFIGURATION', dirname(__FILE__) . '/log4php.properties');
      require_once(dirname(__FILE__) . '/log4php/src/log4php/LoggerManager.php');
      register_shutdown_function(array('LoggerManager', 'shutdown'));

      - Log away in your scripts:

      $log =& LoggerManager::getLogger('MyApp');
      $log->debug("Debug test.");

      If you're not using log4php, you would basically check the size of the
      log file before appending to it (using ftell or filesize); if it
      exceeds your maximum, close the file, move it to a backup or delete it,
      then re-open the file and proceed to write. The LoggerAppenderRollingFile
      class in log4php/src/log4php/appenders/LoggerAppenderRollingFile.php is
      a good example of this.
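      A minimal sketch of that manual check-and-rotate approach could look
      like this (the path, the 25 KB limit, and the function name are just
      placeholders, not part of log4php):

      <?php
      // Sketch only: check the log's size before appending; if it's over
      // the limit, move the current log aside and start a fresh one.
      function append_log_entry($line, $file = '/tmp/my-app.log', $maxBytes = 25600)
      {
          clearstatcache();   // avoid a stale cached filesize()
          if (file_exists($file) && filesize($file) > $maxBytes) {
              @unlink($file . '.1');          // drop any previous backup
              rename($file, $file . '.1');    // rotate the full log to a backup
          }
          $fp = fopen($file, 'a');            // re-open (or create) the log
          fwrite($fp, $line . "\n");
          fclose($fp);
      }
      ?>

      Because the rotation is a single rename rather than a copy, entries
      written by another process during the rotation at worst end up in the
      backup file rather than being lost.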

      Not precisely the rotating buffer that you wanted, but it might do for
      most apps... If you do need a rotating buffer, you could do it in the
      way you described...

      veg_all@yahoo.com wrote:
      I am looking for a simple way to keep my log files from growing too
      large. Basically I want something that truncates off the first 25 KB
      of a 100 KB log file. I could do it by reading the file twice: first to
      determine the number of lines, then a second time to write a copy of the
      reduced log file. Any simpler workarounds?


      • David T. Ashley

        #4
        Re: Trimming a large log file?

        <veg_all@yahoo.com> wrote in message
        news:1164773514.284287.98440@16g2000cwy.googlegroups.com...
        I am looking for a simple way to keep my log files from growing too
        large. Basically I want something that truncates off the first 25 KB
        of a 100 KB log file. I could do it by reading the file twice: first to
        determine the number of lines, then a second time to write a copy of the
        reduced log file. Any simpler workarounds?
        You can also do it using the standard Unix "head", "tail", and optionally
        "wc" commands. (You can use "man" to look those up.)

        For example, if you want to keep the last 20,000 lines of the log:

        tail -n 20000 infile >outfile

        If you're a clever shell programmer, you may find a way to shift the files
        around cleverly so that no appends are lost, but I'm not that clever. You
        need to understand Unix file semantics and all that.

        Rotating the logs (without actually trying to modify or shorten the current
        log file) is actually much simpler (a simple "mv"). Unix file semantics
        guarantee that if a file is open and you rename it, any appends to it will
        get done. The next open-for-append call will create a new file.
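        In PHP terms, a sketch of that rename-based rotation might look like
        this (file names and the 100 KB threshold are placeholders):

        <?php
        // Writer side: every process that logs just opens for append and writes.
        $fp = fopen('log', 'a');
        fwrite($fp, "This is my log file entry.\n");
        fclose($fp);

        // Rotation side (run from cron or wherever): no copying, just a rename.
        // The next append re-creates 'log'; at worst a few pending entries end
        // up in 'log_old' rather than being lost.
        if (file_exists('log') && filesize('log') > 100 * 1024) {
            rename('log', 'log_old');
        }
        ?>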




        • veg_all@yahoo.com

          #5
          Re: Trimming a large log file?

          Thanks for the replies. Here is what I came up with in Perl; I'll
          transfer it to PHP since I need it there as well.

          sub check_log_size {

          my $size = ( -s 'log' ) / 1024 ;   # size of 'log' in KB
          $size = int $size;

          if ( $size > 25 ) {
          open (FILEHANDLE1, "log" ) || &die_sub ( "Can't open log" );
          open (FILEHANDLE2, ">log_old" ) || &die_sub ( "Can't open log_old" );
          while (<FILEHANDLE1>) { print FILEHANDLE2 $_; }   # copy each line to the backup
          close (FILEHANDLE1);
          close (FILEHANDLE2);
          unlink ( 'log' );   # then remove the original
          }
          } # end sub
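          A rough PHP equivalent of that subroutine (same 'log' / 'log_old'
          names; it mirrors the copy-then-delete idea, which the next reply
          points out leaves a race window) might look like:

          <?php
          // Sketch: same copy-then-delete approach as the Perl sub above.
          function check_log_size()
          {
              if (!file_exists('log')) {
                  return;                                    // nothing to trim yet
              }
              $sizeKb = (int) (filesize('log') / 1024);      // size in KB
              if ($sizeKb > 25) {
                  copy('log', 'log_old') or die("Can't copy log");   // copy to a backup
                  unlink('log');                                     // then remove the original
              }
          }
          ?>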


          • David T. Ashley

            #6
            Re: Trimming a large log file?

            <veg_all@yahoo.com> wrote in message
            news:1164842816.629524.230140@l39g2000cwd.googlegroups.com...
            Thanks for the replies. Here is what I came up with in Perl; I'll
            transfer it to PHP since I need it there as well.
            >
            sub check_log_size {
            >
            my $size = ( -s 'log' ) / 1024 ;
            $size = int $size;
            >
            if ( $size > 25 ) {
            open (FILEHANDLE1, "log" ) || &die_sub ( "Can't open log");
            open (FILEHANDLE2, ">log_old" );
            while (<FILEHANDLE1> ) { print FILEHANDLE2 $_; }
            *
            close (FILEHANDLE1);
            *
            close (FILEHANDLE2);
            *
            unlink ( 'log' );
            }
            } # end sub
            >
            The solution you proposed above probably has race conditions. Specifically,
            if a log entry is made (by another process) at the points I've marked with
            an asterisk above, you will lose some lines from the log file.

            The more traditional solution is just to rename the log file to a new name
            (i.e. something like "mv log log_old"). In other words, no copy, just "mv".

            Each process that writes the log file has code like this (and I could be
            wrong on the exact form, too lazy to look up the 'C' library calls).

            FILE *handle = fopen("log", "a"); /* Note the "a" for append */
            fprintf(handle, "This is my log file entry.\n");
            fclose(handle);

            The rationale--and somebody please whack me if I'm wrong--is that Unix file
            semantics guarantee that when one process opens a file for append and
            another process renames it, one process or the other will win. If the "mv"
            happens after the "fopen" above but before the "fprintf", then the log entry
            will be appended to the renamed file. The next call to fopen in append mode
            will create the new current log file (to replace the one that was renamed).

            So, when you "mv" a file in order to rotate logs, there will in practice be
            a fraction of a second after the rename where some pending log entries will
            get written to the renamed file, but none will be lost.

            This traces to Unix file semantics, inodes, and all that.

            I'm sure Perl has an "mv" function that maps to the operating system's mv
            functionality.

            Dave.




            • David T. Ashley

              #7
              Re: Trimming a large log file?

              "David T. Ashley" <dta@e3ft.comwr ote in message
              news:giCbh.3572 $xf7.2887@fe81. usenetserver.co m...
              The solution you proposed above probably has race conditions.
              Specifically, if a log entry is made (by another process) at the points
              I've marked with an asterisk above, you will lose some lines from the log
              file.
              Also, if you're seeing this problem for the first time, this might help:



              The operating system will assign CPU time to processes in unpredictable
              patterns, so if one process stops running and the other goes at the points
              I've marked with an asterisk ...

              The "mv" solution is robust because Unix was designed that way ... the
              solution you proposed probably has a race conditions that may cause log file
              lines to be lost.




              • mouse

                #8
                Re: Trimming a large log file?

                In article <waDbh.1716$s04.10@fe78.usenetserver.com>, dta@e3ft.com
                says...
                "David T. Ashley" <dta@e3ft.com> wrote in message
                news:giCbh.3572$xf7.2887@fe81.usenetserver.com...
                The solution you proposed above probably has race conditions.
                Specifically, if a log entry is made (by another process) at the points
                I've marked with an asterisk above, you will lose some lines from the log
                file.
                >
                Also, if you're seeing this problem for the first time, this might help:
                >

                >
                The operating system will assign CPU time to processes in unpredictable
                patterns, so if one process stops running and the other goes at the points
                I've marked with an asterisk ...
                >
                The "mv" solution is robust because Unix was designed that way ... the
                solution you proposed probably has a race conditions that may cause log file
                lines to be lost.
                >
                Pseudocode:
                start
                if filesize(log.txt) > 2 MB then rename log.txt to log.bak
                end;

