Cleanup patterns

  • Default User

    #61
    Re: Cleanup patterns

    CBFalconer wrote:
    Keith Thompson wrote:
    ... snip ...
    waste of time -- just as any insurance policy that doesn't pay off
    is a waste of money. (Life insurance is a really bad deal if you
    happen to be immortal.)
    >
    Now I know why I haven't had any for the past 20 years :-)
    Because you're immortal? Wow.
    (Apart
    from the Social Security death 'benefit').

    I have some freebie insurance from work, some multiplier times your
    yearly salary.


    Brian


    • Keith Thompson

      #62
      Re: Cleanup patterns

      "Default User" <defaultuserbr@yahoo.com> writes:
      CBFalconer wrote:
      >
      >Keith Thompson wrote:
      >
      >... snip ...
      waste of time -- just as any insurance policy that doesn't pay off
      is a waste of money. (Life insurance is a really bad deal if you
      happen to be immortal.)
      >>
      >Now I know why I haven't had any for the past 20 years :-)
      >
      Because you're immortal? Wow.
      So far ...

      --
      Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
      San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
      We must do something. This is something. Therefore, we must do this.


      • CBFalconer

        #63
        Re: Cleanup patterns

        Keith Thompson wrote:
        "Default User" <defaultuserbr@yahoo.com> writes:
        >CBFalconer wrote:
        >>Keith Thompson wrote:
        >>>
        >>... snip ...
        >>>waste of time -- just as any insurance policy that doesn't pay off
        >>>is a waste of money. (Life insurance is a really bad deal if you
        >>>happen to be immortal.)
        >>>
        >>Now I know why I haven't had any for the past 20 years :-)
        >>
        >Because you're immortal? Wow.
        >
        So far ...
        Well, I only have experience to go by, and so far there have been
        no exceptions. I've had a couple of near misses lately though. Is
        this an omen? Maybe we need an ISO standard for reference.

        --
        Chuck F (cbfalconer at maineline dot net)
        Available for consulting/temporary embedded and systems.
        <http://cbfalconer.home.att.net>



        • Richard Bos

          #64
          Re: Cleanup patterns

          rpbg123@yahoo.com (Roland Pibinger) wrote:
          On Mon, 11 Dec 2006 07:41:06 GMT, Richard Bos wrote:
          The problem with xmalloc()-like functions is that they crash anyway;
          they just crash with a predictable message.
          >
          They don't crash the program, they call exit (which also calls
          atexit).
          To the end user, it's usually all the same.
          This _may_ be good enough,
          but IMO not nearly as often as I see it done in the wild.
          >
          In order to run your program primarily needs one resource, memory.
          When it runs out of memory (e.g. due to a memory leak)
          Well, _yes_. If you assume a rampant bug that no C programmer should
          let into a production program any more (and that, therefore, some
          famous companies regularly do), you might as well terminate the program there
          and then. In fact, why not be pro-active, and terminate it at random
          intervals?
          there is hardly anything you can do except to (more or less)
          gracefully terminate the program.
          As shown upthread, for a _well written_ program, this is simply untrue.
          It _may_ be the case, but often you have better options.
          Not even that is always possible since some OS (IIRC, Linux)
          never return NULL for malloc even when memory is exhausted.
          Those OSes are broken. If you can't trust your OS not to lie to you, why
          trust your computer at anything?

          Richard


          • Roland Pibinger

            #65
            Re: Cleanup patterns

            On Mon, 11 Dec 2006 09:30:03 GMT, Keith Thompson wrote:
            >That behavior is arguably non-conforming.
            <offtopic>
            When Linux Runs Out of Memory

            </offtopic>


            • Eric Sosman

              #66
              [OT] Re: Cleanup patterns

              Roland Pibinger wrote:
              On Mon, 11 Dec 2006 09:30:03 GMT, Keith Thompson wrote:
              >That behavior is arguably non-conforming.
              >
              <offtopic>
              When Linux Runs Out of Memory

              </offtopic>
              <offtopic>
              Thomas Habets had an unfortunate experience recently. His Linux system ran out of memory, and [...]

              </offtopic>

              --
              Eric Sosman
              esosman@acm-dot-org.invalid


              • Randy Howard

                #67
                Re: [OT] Re: Cleanup patterns

                Eric Sosman wrote
                (in article <P8GdnXLDB9if_xnYnZ2dnUVZ_h7inZ2d@comcast.com>):
                OT, Perhaps. Hilarious? Definitely.

                Over-engineering a solution to a problem that doesn't even need
                to exist.

                --
                Randy Howard (2reply remove FOOBAR)
                "The power of accurate observation is called cynicism by those
                who have not got it." - George Bernard Shaw


                • Bill Reid

                  #68
                  Re: Cleanup patterns


                  slebetman@yahoo.com <slebetman@gmail.com> wrote in message
                  news:1165824537.367887.101780@79g2000cws.googlegroups.com...
                  Bill Reid wrote:
                  Richard Heathfield <rjh@see.sig.invalid> wrote in message
                  news:frqdnagbYrzNFPPYnZ2dnUVZ8tSdnZ2d@bt.com...
                  MQ said:
                  Bill Reid wrote:

                  For more complex functions, you'll quite often have contingent
                  dependencies for each resource acquisition that don't fit neatly into
                  the "nest", and the overhead of the function calling mechanism
                  means that functions are not infinitely or always a "fabulous
                  mechanism" for something as silly as "keeping the indent
                  level down" (unless you like your "C" code to run as slow
                  as "Java"!).

                  These are my greatest concerns, as I am writing file system driver
                  code that needs to be fast and efficient.
                  >
                  Rule 1: if it doesn't work, it doesn't matter how fast it is.
                  Rule 2: it is easier to make a correct program fast than it is to make a
                  fast program correct.
                  Rule 3: it is easier to make a readable program correct than it is to make a
                  correct program readable.
                  >
                  I think these are actually the three rules for pontificating about
                  programming instead of actually programming... years ago I read
                  the REAL rules for writing "correct" "fast" and "efficient" code, and
                  of course, as everybody who's not just pontificating knows,
                  it boils down to one general rule: pick the two you really want, because
                  a lot of times you CAN'T have all three...
                  And that's before we get anywhere near the Two Rules of
                  Micro-Optimisation.
                  For completeness, these are:
                  >
                  Rule 1: Don't do it.
                  Rule 2 (for experts only): Don't do it yet.
                  >
                  I certainly wasn't talking about anything like "micro-optimization".
                  I'm talking about great big gobs of "MACRO-optimization", like
                  slowing down or speeding up what was virtually the IDENTICAL
                  code by a factor of up to ten-fold (I could make the same program run
                  in either two seconds or 20 seconds)...
                  >
                  On a lot of modern CPUs the difference between a function call and a
                  jump (as in generated by switch or if) is nowhere near ten-fold when
                  compiled with a properly optimising compiler. One fold at most (note
                  that one-fold is ten times slower)
                  OK, I'm officially confused...I specifically said "two seconds or 20
                  seconds" as being "ten-fold" (as in, a thousand is a hundred "ten-fold"),
                  then you re-define "fold" as "ten times slower". So what you're saying
                  is that with a "properly optimising compiler", a function call is UP TO
                  "ten times slower"...which is basically what I was always told, it takes
                  somewhere around 4-8 extra cycles as overhead for a function call,
                  so I think we don't have a disagreement there?
                  or more typically up to 4 times
                  slower.
                  "6" was the magic number I was told (or actually read) to work with...
                  Indeed on at least two architectures I code for, a function
                  call (if not recursive) compiles to execute in exactly the same number
                  of CPU cycles as a regular jump/branch.
                  >
                  This may be true, and I don't doubt that it depends on the specific
                  function call and optimization "tricks" of the compiler in any event...
                  This is not exactly an academic exercise for me, since a lot of the
                  code I work with takes several HOURS to run on a Pentium-class
                  machine, and I have to run it EVERY day.
                  >
                  Of course, if you're programming for an architecture as archaic and
                  register-starved as the Pentium then function call overhead can be an
                  issue.
                  Yeah, slam the Pentium, then go ahead and slam "MIPS" and "SPARC"
                  while you're at it, since I've been able to significantly speed up or slow
                  down programs on all of them (in many cases, the exact same program
                  on all the different architectures).
                  Even so, improvements have been made to x86-64 which makes the
                  function call overhead for x86-64 even less than before.
                  >
                  I will admit that the ever-increasing power and speed of computer
                  hardware does make a lot of these considerations practically moot...but
                  that still doesn't excuse how I saw people coding back when it made
                  a TREMENDOUS practical difference, and these WERE what had
                  to be considered the elite systems software engineers in the entire
                  world...

                  ---
                  William Ernest Reid


                  • Bill Reid

                    #69
                    Re: Cleanup patterns


                    Ian Collins <ian-news@hotmail.com> wrote in message
                    news:4u4j9eF167kqiU4@mid.individual.net...
                    Bill Reid wrote:

                    This is not exactly an academic exercise for me, since a lot of the
                    code I work with takes several HOURS to run on a Pentium-class
                    machine, and I have to run it EVERY day.
                    >
                    Maybe it's time for a 64 bit upgrade!
                    >
                    Then I'll re-write it all in Java!

                    ---
                    William Ernest Reid


                    • slebetman@yahoo.com

                      #70
                      Re: Cleanup patterns

                      Bill Reid wrote:
                      slebetman@yahoo.com <slebetman@gmail.com> wrote in message
                      news:1165824537.367887.101780@79g2000cws.googlegroups.com...
                      Bill Reid wrote:
                      I certainly wasn't talking about anything like "micro-optimization".
                      I'm talking about great big gobs of "MACRO-optimization", like
                      slowing down or speeding up what was virtually the IDENTICAL
                      code by a factor of up to ten-fold (I could make the same program run
                      in either two seconds or 20 seconds)...
                      On a lot of modern CPUs the difference between a function call and a
                      jump (as in generated by switch or if) is nowhere near ten-fold when
                      compiled with a properly optimising compiler. One fold at most (note
                      that one-fold is ten times slower)
                      >
                      OK, I'm officially confused...I specifically said "two seconds or 20
                      seconds" as being "ten-fold" (as in, a thousand is a hundred "ten-fold"),
                      then you re-define "fold" as "ten times slower".
                      Sorry, I was confusing "fold" with "orders of magnitude". Fold, proper,
                      commonly means multiplication by that factor which makes what you said
                      correct and what I said wrong.
                      or more typically up to 4 times
                      slower.
                      >
                      "6" was the magic number I was told (or actually read) to work with...
                      >
                      Indeed on at least two architectures I code for, a function
                      call (if not recursive) compiles to execute in exactly the same number
                      of CPU cycles as a regular jump/branch.
                      This may be true, and I don't doubt that it depends on the specific
                      function call and optimization "tricks" of the compiler in any event...
                      On a lot of modern architectures (this excludes x86-32 although it
                      includes x86-64) this is quite trivial (see below).
                      This is not exactly an academic exercise for me, since a lot of the
                      code I work with takes several HOURS to run on a Pentium-class
                      machine, and I have to run it EVERY day.
                      Of course, if you're programming for an architecture as archaic and
                      register-starved as the Pentium then function call overhead can be an
                      issue.
                      Even so, improvements have been made to x86-64 which makes the
                      function call overhead for x86-64 even less than before.
                      I will admit that the ever-increasing power and speed of computer
                      hardware does make a lot of these considerations practically moot...but
                      The point is not the speed of the hardware. No matter how fast the
                      hardware becomes a ten fold slowdown is STILL a ten fold slowdown. The
                      point is that x86-64 is not register starved allowing the compiler to
                      avoid using stacks to pass parameters and especially for the kind of
                      trivial functions we were discussing can also allow the compiler to do
                      *nothing* to pass parameters via overlays.

                      Hence for the kinds of functions we're discussing (small utility
                      functions to reduce indentation, remember that we're not discussing
                      function calling *in general*) passing parameters can be optimised to
                      generate zero instructions and calling the function itself generates
                      one instruction. On most modern machines that instruction takes the
                      same amount of time to execute as a conditional branch.
                      that still doesn't excuse how I saw people coding back when it made
                      a TREMENDOUS practical difference, and these WERE what had
                      to be considered the elite systems software engineers in the entire
                      world...
                      And now we have the conclusion. Your trick programming and inlining
                      WERE useful in the age of dino-mainframes where workstation CPUs were
                      nothing more than glorified microcontrollers. Today even lowly 50 cent
                      microcontrollers don't break a sweat executing deeply nested function
                      calls (unless you're doing something silly like directly generating
                      video signals on output pins without the help of graphics hardware).
