"cost" of generic dictionaries ?

This topic is closed.
  • Bill Woodruff

    "cost" of generic dictionaries ?

    For me Generic Dictionaries in .NET (I'm still back with 2.0 as yet ... I do
    play in my favorite key of C#, thanks) are very tasty, very useful.

    But, I do find myself wondering what the "overhead" (memory, look-up times)
    of using them is like. And, as often as I admonish myself that I really
    should go look at what the compiler writes, just as often I shirk the
    descent into those op-code lands. Yes, I know I would be a better person if
    I got L. Roeder's "Reflector" and ... :)

    Obviously, using a value type (including structs) as a key, you avoid the
    boxing you would have incurred with the older flavour of non-generic
    dictionary. For reference types used as keys I am less certain; I am only
    aware that you might have to roll your own comparer depending on what you
    were storing and ...
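    The boxing difference is easy to see in a minimal sketch (class and
    variable names here are mine, purely illustrative), comparing the old
    non-generic Hashtable against Dictionary<int, int>:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // Non-generic Hashtable: the int key and int value are boxed
        // to object on every Add, and boxed again for every look-up.
        Hashtable ht = new Hashtable();
        ht.Add(42, 100);                 // two boxing operations
        int fromHt = (int)ht[42];        // unbox, plus a boxed 42 for the look-up

        // Generic Dictionary<int, int>: keys and values stay as raw
        // ints; no boxing on insert or look-up.
        Dictionary<int, int> d = new Dictionary<int, int>();
        d.Add(42, 100);
        int fromD = d[42];

        Console.WriteLine(fromHt == fromD); // True
    }
}
```

    Both give the same answer; the generic version just skips the heap
    allocations along the way.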

    But, let's say that you have a program that is doing a lot of look-ups in
    GD's. Perhaps one of the GD's involved even has keys, or values, or both,
    that are themselves GD's.
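    For the nested case, a minimal sketch (the names are hypothetical, not
    from any real program) - the useful point being that each level is still
    just one hash look-up, so chaining two costs two hashes, not a scan:

```csharp
using System;
using System.Collections.Generic;

class NestedLookup
{
    static void Main()
    {
        // A GD whose values are themselves GDs.
        Dictionary<string, Dictionary<string, int>> scores =
            new Dictionary<string, Dictionary<string, int>>();

        scores["bill"] = new Dictionary<string, int>();
        scores["bill"]["level1"] = 90;

        // Two O(1) hash look-ups, one per level.
        Console.WriteLine(scores["bill"]["level1"]); // 90

        // TryGetValue avoids an exception (and a second look-up)
        // when a key may be missing.
        Dictionary<string, int> inner;
        int v;
        if (scores.TryGetValue("bill", out inner) &&
            inner.TryGetValue("level1", out v))
        {
            Console.WriteLine(v); // 90
        }
    }
}
```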

    Appreciate hearing if anyone has developed any heuristics about the
    strategic use of GD's given certain types of application scenarios.

    A friend recently gave me a talking-to about using 'foreach', telling me it
    was much better, in terms of performance, to just go ahead and do a for
    loop. This has stimulated my interest in thinking about other lately
    arrived .NET features, like GD's, in terms of memory use and look-up times.

    thanks !

    Bill


  • Marc Gravell

    #2
    Re: "cost&quot ; of generic dictionaries ?

    avoiding the virtcalls...
    Or perhaps (more likely?) it was the JIT doing its thing; but either way,
    there really isn't much in it.

    I also forgot to say that "foreach" better expresses intent; consider a
    linked list that provided an int indexer. I would expect "foreach" to
    simply walk the nodes, yielding each in turn (very quick and simple) -
    however, it is also entirely likely that the indexer starts at the
    beginning each time (since there is no notion of state to retain between
    calls) - hence the indexer (for) would actually telescope with O(n^2)
    performance, whereas foreach would be linear O(n). Of course, the standard
    LinkedList<T> doesn't provide an indexer, so it is a moot point - but it
    highlights that you should be worrying about what you want to achieve, and
    (until proven otherwise by repeatable benchmarks) let the runtime and
    base-libraries worry about the rest.
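    A sketch of that indexer-vs-foreach point (the At helper below is my
    hypothetical indexer - as noted, LinkedList<T> itself deliberately
    provides none):

```csharp
using System;
using System.Collections.Generic;

class WalkDemo
{
    // Hypothetical indexer over LinkedList<T>. With no state retained
    // between calls, every call must re-walk from the head: O(index).
    static T At<T>(LinkedList<T> list, int index)
    {
        LinkedListNode<T> node = list.First;
        for (int i = 0; i < index; i++)
            node = node.Next;
        return node.Value;
    }

    static void Main()
    {
        LinkedList<int> list = new LinkedList<int>();
        for (int i = 0; i < 5; i++) list.AddLast(i);

        // "for" + indexer: 0+1+2+3+4 node hops => O(n^2) overall.
        int sumFor = 0;
        for (int i = 0; i < list.Count; i++)
            sumFor += At(list, i);

        // "foreach": the enumerator keeps its position, one hop per
        // element => O(n) overall.
        int sumForeach = 0;
        foreach (int value in list)
            sumForeach += value;

        Console.WriteLine(sumFor == sumForeach); // True (both are 10)
    }
}
```

    Same answer either way; only the number of node hops differs, which is
    exactly why the benchmark, not the loop keyword, should settle it.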

    Anything else is premature optimisation.

    Marc

