Proper Usage of Shared Memory

  • myren, lord

    Proper Usage of Shared Memory

    When I first discovered shared memory (between multiple processes), I
    immediately started thinking about how to build my own VM subsystem plus
    locking mechanisms on top of a single large block of memory. That seems
    like one option; the other appears to be giving each "object" you want
    to share its own shared memory segment, allocating objects into
    individually defined shared memory spaces. But then you have many, many
    objects being shared. A VM subsystem would let you allocate one large
    contiguous chunk of memory and rely on your program for allocation,
    deallocation, spatial locality/coherence, and all other matters.

    Is this at all a wise idea? Is there any advantage over having a large
    number of small shared memory objects? What overhead does having shared
    memory objects incur?

    Sometimes I think the answer comes down merely to locking: with a
    complex VM subsystem mapped onto a single flat space, you could tune
    much finer-grained locking for your system. How much more is at stake
    than just locking? Obviously difficulty of implementation is a key
    factor, but what about the technical advantages and disadvantages? Is
    there a penalty for having thousands of shared memory objects between a
    collection of programs?

    Myren
  • Thomas Matthews

    #2
    Re: Proper Usage of Shared Memory

    myren, lord wrote:
    > When I first discovered shared memory (between multiple processes) I
    > immediately started thinking of how to build my own VM subsystem +
    > locking mechanisms for a large single block of memory. This seems like
    > one option, the other appears to be just having each "object" you want
    > to share be a shared mem space to itself: allocate objects into a
    > defined shared mem space. But here you have many many objects being
    > shared. Having a VM subsystem would allow you to just allocate a large
    > contiguous chunk of memory and then rely on your program for allocation,
    > deallocation, spatial locality/coherence and all other matters.

    First off, this has nothing to do with the C++ language.
    Probably a better newsgroup is news:comp.programming.
    > Is this at all a wise idea?

    Maybe, maybe not. You'll have to check your operating system to see
    how it handles the memory request. Some OSes already have virtual
    memory, so when you allocate a contiguous chunk, it may not be in
    physical memory; or another task may be using that memory.

    > Is there any advantage over having a large
    > number of small shared memory objects?

    Research the topic of "Memory Fragmentation".

    > What overhead does having shared memory objects incur?

    The minimal overhead is synchronization: semaphores, signals or
    mutexes. One must be sure that two tasks are not writing to the same
    memory at the same time.

    > Sometimes I think the answer comes down to one merely of locking: that
    > with a complex vm subsystem mapped on a single flat space you could tune
    > far better far finer grained locking for your system. How much more is
    > at stake than simply locking? Obviously difficulty of implementation is
    > a key factor, but what about technical advantages and disadvantages? is
    > there a penalty for having thousands of shared memory objects between a
    > collection of programs?
    >
    > Myren


    --
    Thomas Matthews

    C++ newsgroup welcome message:

    C++ Faq: http://www.parashift.com/c++-faq-lite
    C Faq: http://www.eskimo.com/~scs/c-faq/top.html
    alt.comp.lang.learn.c-c++ faq:

    Other sites:
    http://www.josuttis.com -- C++ STL Library book
