Linux, Perl, and Memory problem

  • Myron Turner

    Linux, Perl, and Memory problem

    I'm not sure whether this question belongs entirely here or in a Perl
    group--but it probably requires knowledge of both.

    I've written a Perl module, currently in use, which does asynchronous
    searches of library databases anywhere in the world. It forks off a
    separate process for each database it contacts. In the past
    I've had it search successfully through more than 1000 databases,
    reporting back one record from each.

    The processes are forked off 5 at a time, every 25 seconds. The
    forked processes are controlled by Event timers, so that they time
    out after 20 seconds if they can't connect to the server they are
    supposed to query. But if they do connect, they are left in play.
    This means that a backlog of forked processes can build up while
    waiting to get the records they've requested, especially when querying
    large numbers of databases.
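
    For concreteness, here is a minimal sketch of that batching
    pattern. It is not the actual module code: the server list and the
    query routines are placeholders, and a core alarm() stands in for
    the Event timers.

    use strict;
    use warnings;

    sub connect_to    { sleep 1; return 1 }  # stand-in for the real connect
    sub fetch_records { sleep 1 }            # stand-in for record retrieval

    my @databases = ('db1', 'db2', 'db3');   # hypothetical server list
    while (my @batch = splice(@databases, 0, 5)) {
        for my $db (@batch) {
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            next if $pid;                    # parent: start the next child
            # child: 20 seconds to connect, then give up
            eval {
                local $SIG{ALRM} = sub { die "timeout\n" };
                alarm 20;
                my $conn = connect_to($db);
                alarm 0;                     # connected: left in play
                fetch_records($conn);        # no time limit after connect
            };
            exit($@ ? 1 : 0);
        }
        sleep 25;                            # next batch every 25 seconds
    }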

    Each record averages about 1k. There is, in addition, about 5k of
    overhead for each forked process. Recently, I've wanted to use this
    module to again query upwards of 1000 databases but to bring back
    between 10 and 25 records from each. So we now have as much as 30k
    (25 records at about 1k, plus the 5k overhead) devoted to each
    forked process. The result was a great deal of disk thrashing and
    repeated reports to the terminal from the operating system that it
    was out of memory and had killed one of the forked processes.
    During this time the terminal was essentially locked and wouldn't
    respond to the keyboard, so there was nothing I could do but wait or
    reboot. The disk thrashing, I assume, was a sign of memory swapping.


    I tried to solve the problem by running a copy of top from the main
    program before forking off each batch of 5 processes. I examined
    the output from top to determine whether the main process had gone
    above 60% of memory capacity. If so, I implemented a 5-minute
    pause. This worked like a charm. But when I looked at what was
    really happening, it turned out that the sleep period never had to
    be set. Memory never exceeded about 5% of the total memory resources.
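
    In case it helps, here is a sketch of that check (not the actual
    code--it assumes a stock procps top, and the %MEM column position
    can differ between versions):

    # Return the %MEM figure top reports for a given pid (0 if absent).
    sub memory_percent {
        my ($pid) = @_;
        for my $line (`top -b -n 1 -p $pid`) {  # -b batch, -n 1 one pass
            my @fields = split ' ', $line;
            # %MEM is column 10 in top's default layout
            return $fields[9] if @fields && $fields[0] eq $pid;
        }
        return 0;
    }

    sleep 300 if memory_percent($$) > 60;  # the 5-minute backoff above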

    So, I thought, maybe it just needs the extra time between each batch
    of 5 forks. Instead of using the call to top, I implemented a
    60-second pause between each batch of 5 forks. This was better than
    nothing, but there was nevertheless a significant memory drain and
    eventually the terminal froze. By using the ps command I could see
    that there was a huge backlog of forked processes in memory--this was
    not the case when I used top. And the call to top took only 6
    seconds.
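
    One thing worth ruling out (I haven't confirmed it) is that the
    backlog ps shows includes finished children that were never reaped.
    Zombies cost little memory themselves, but they do clutter the
    process table; a non-blocking waitpid loop between batches would
    clear them:

    use POSIX ':sys_wait_h';  # for WNOHANG

    # Reap every child that has already exited, without blocking.
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        warn "reaped child $pid (exit status ", $? >> 8, ")\n";
    }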

    So it seems that there is something soothing to the operating system
    in running the external program--top--which has nothing to do with
    giving the system more time to process the forks.

    Myron Turner

  • Myron Turner

    #2
    Re: Linux, Perl, and Memory problem

    On Mon, 22 Mar 2004 16:11:10 GMT, mturner@ms.umanitoba.ca (Myron
    Turner) wrote:
    Sorry, I left off the last point--which is that I'd like to know what
    is happening here so that I can address the problem without just
    blindly inserting a call to Linux's top command between each batch
    of 5 forked processes.

    Myron Turner


    • Joe Smith

      #3
      Re: Linux, Perl, and Memory problem

      Myron Turner wrote:
      > By using the ps command I could see
      > that there was a huge backlog of forked processes in memory

      You've clearly got a logic-flow error; too many processes are being
      created all at once. Fix that first.

      Try outputting debugging messages before and after each fork.

      warn "Process $$ about to fork\n";
      $child_pid = fork();
      if ($child_pid) {
      warn "Parent process $$ created child $child_pid\n";
      } else {
      warn "New child process $$ created\n";
      }

      Check the messages going to STDERR to verify that the expected
      number of child processes are being created in the proper order.
      -Joe

      P.S. Please post to comp.lang.perl.misc next time.


      • Myron Turner

        #4
        Re: Linux, Perl, and Memory problem

        On Mon, 22 Mar 2004 20:26:38 GMT, Joe Smith <Joe.Smith@inwap.com>
        wrote:
        >Myron Turner wrote:
        >> By using the ps command I could see
        >> that there was a huge backlog of forked processes in memory
        >
        >You've clearly got a logic-flow error; too many processes are
        >being created all at once. Fix that first.
        >
        That wasn't the case. I was able to fix the problem by monitoring
        disk activity and allowing the user to tailor throughput to his/her
        own memory resources.
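
        For anyone curious, here is a sketch of the idea (not the
        actual module code--the /proc field and the threshold are
        assumptions to be tuned to the local machine): watch swap-out
        activity between batches and back off while the system is
        swapping.

        # Pages swapped out since boot, per /proc/vmstat (Linux 2.6+).
        sub pages_swapped_out {
            open my $fh, '<', '/proc/vmstat' or return 0;
            while (<$fh>) { return $1 if /^pswpout\s+(\d+)/ }
            return 0;
        }

        # Swap-out rate in pages/second over a short sample window.
        sub swap_rate {
            my $start = pages_swapped_out();
            sleep 2;
            return (pages_swapped_out() - $start) / 2;
        }

        my $threshold = 50;  # pages/sec; a placeholder, user-tunable
        sleep 60 while swap_rate() > $threshold;  # pause between batches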

        Myron Turner
