Can a double always represent an int exactly?

  • Keith Thompson

    #16
    Re: Can a double always represent an int exactly?

    "dandelion" <dandelion@mead ow.net> writes:[color=blue]
    > "Fred Ma" <fma@doe.carlet on.ca> wrote in message[color=green]
    >> dandelion wrote:[color=darkred]
    >> >
    >> > M_PI/M_PI seldomly equals 1.000000.[/color]
    >>
    >> I imagine that would depend on how division is implemented.[/color]
    >
    > Of course, that's why I wrote "seldomly". And which implementation would
    > return 1.000000, exactly? I'm curious. Try a few CPU's/FPU's and check the
    > results. I'll buy you a beer if
    > you find one.[/color]

    I just tried this on a wide variety of systems; M_PI/M_PI compares
    equal to 1.0 on all but one of them. (The exception was a Cray SV1.)

    Here's the program I used:

    #include <stdio.h>
    #include <math.h>
    int main(void)
    {
        double var_M_PI = M_PI;
        double ratio = M_PI / M_PI;
        double var_ratio = var_M_PI / var_M_PI;
        printf("M_PI = %g\n", M_PI);
        printf("var_M_PI = %g\n", var_M_PI);
        printf("ratio = %g\n", ratio);
        printf("ratio %s 1.0\n", ratio == 1.0 ? "==" : "!=");
        printf("var_ratio = %g\n", var_ratio);
        printf("var_ratio %s 1.0\n", var_ratio == 1.0 ? "==" : "!=");
        return 0;
    }

    Caveats: A moderately clever compiler could compute the value at
    compilation time (I didn't check this, but I didn't use any
    optimization options). And of course M_PI is non-standard.
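
    One way to take compile-time evaluation out of the picture (a sketch, not
    part of the original test; it still relies on the non-standard M_PI) is to
    route the operands through volatile objects so the compiler cannot fold
    the division:

    #include <stdio.h>
    #include <math.h>
    int main(void)
    {
        /* volatile forces the division to happen at run time on the FPU,
           rather than being folded to a constant by the compiler. */
        volatile double num = M_PI;
        volatile double den = M_PI;
        double ratio = num / den;
        printf("ratio %s 1.0\n", ratio == 1.0 ? "==" : "!=");
        return 0;
    }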

    --
    Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.

    • Fred Ma

      #17
      Re: Can a double always represent an int exactly?

      Keith Thompson wrote:
      >
      > "dandelion" <dandelion@meadow.net> writes:
      > > "Fred Ma" <fma@doe.carleton.ca> wrote in message
      > >> dandelion wrote:
      > >> >
      > >> > M_PI/M_PI seldomly equals 1.000000.
      > >>
      > >> I imagine that would depend on how division is implemented.
      > >
      > > Of course, that's why I wrote "seldomly". And which implementation would
      > > return 1.000000, exactly? I'm curious. Try a few CPU's/FPU's and check the
      > > results. I'll buy you a beer if you find one.
      >
      > I just tried this on a wide variety of systems; M_PI/M_PI compares
      > equal to 1.0 on all but one of them. (The exception was a Cray SV1.)
      >
      > Here's the program I used:
      >
      > #include <stdio.h>
      > #include <math.h>
      > int main(void)
      > {
      >     double var_M_PI = M_PI;
      >     double ratio = M_PI / M_PI;
      >     double var_ratio = var_M_PI / var_M_PI;
      >     printf("M_PI = %g\n", M_PI);
      >     printf("var_M_PI = %g\n", var_M_PI);
      >     printf("ratio = %g\n", ratio);
      >     printf("ratio %s 1.0\n", ratio == 1.0 ? "==" : "!=");
      >     printf("var_ratio = %g\n", var_ratio);
      >     printf("var_ratio %s 1.0\n", var_ratio == 1.0 ? "==" : "!=");
      >     return 0;
      > }
      >
      > Caveats: A moderately clever compiler could compute the value at
      > compilation time (I didn't check this, but I didn't use any
      > optimization options). And of course M_PI is non-standard.


      In Canada, Moosehead beer is pretty good. :)

      Seriously, I wasn't implying that practical implementations of
      division were necessarily sophisticated enough to recognize
      equivalence of numerator and denominator. What I should have
      said was that I can see such a discrepancy arising, since
      division is not straightforward to implement. I'm talking about
      cases that aren't optimized away at compile time.

      Fred

      • Chris Torek

        #18
        Re: Can a double always represent an int exactly?

        A few minor corrections...

        In article <cl9nfk0rqd@news2.newsguy.com> I wrote (in part):
        >>... a double has nonuniform precision throughout its value range.
        >
        >This is correct (well, I can imagine a weird implementation that
        >deliberately makes "double"s have constant precision by often
        >wasting a lot of space; it seems quite unlikely though).

        It occurs to me now that "precision" is not properly defined here.
        When dealing with scientific notation and decimal numbers, something
        like 1.23e+10 is less precise than 1.230e+10. The precision here
        is determined by the number of digits in the mantissa (which is
        why we have to use the "e+10" notation to suppress "unwanted"
        trailing zeros).

        Using this definition of precision, and keeping in mind that most
        computers today use powers of 2 (binary floating point) rather than
        powers of ten (decimal floating point), we actually do have "constant
        precision", such as "always exactly 24 bits of mantissa" (provided
        we ignore those pesky "denorms" :-) ).
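
        For the record, <float.h> reports this fixed significand width
        directly (a minimal sketch; the 24 and 53 noted in the comments are
        what IEEE-style implementations give, the standard only guarantees
        that the macros exist):

        #include <stdio.h>
        #include <float.h>
        int main(void)
        {
            /* Number of base-FLT_RADIX digits in each significand. */
            printf("FLT_RADIX    = %d\n", FLT_RADIX);
            printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);  /* typically 24 */
            printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);  /* typically 53 */
            return 0;
        }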

        This is of course not what the original poster and I meant by
        "precision" (as illustrated below) -- we were referring to digits
        beyond the decimal point after conversion to printed form via "%f",
        for instance. Note, however, that IBM "hex float" (as used on
        S/360 -- floating point with a radix of 16 instead of 2) really
        *does* have "precision wobble": the number of "useful" bits in the
        mantissa changes as numbers change in magnitude. This gives the
        numerical analysis folks headaches. IEEE floating point is rather
        better behaved.

        I need to fix one more typo though:
        >... [using] "float" ... on most of today's implementations.
        >Here, we get "interesting" effects near 8388608.0 and 16777216.0.
        >Values below 16777216.0 step by ones: 8388608.0 is followed
        >immediately by 8388609.0, for instance, and 16777215.0 is followed
        >immediately by 16777216.0. On the other hand, below (float)(1<<23)
        >or above (float)(1<<24), we step by 1/2 or 2 respectively. Using
        >nextafterf() (if you have it) and variables set to the right values,
        >you might printf() some results and find:
        >
        > nextafterf(8388608.0, -inf) = 8388607.5
        > nextafterf(16777216.0, +inf) = 16777216.2

        This last line should read:

        nextafterf(16777216.0, +inf) = 16777218.0

        (I typed this all in manually, rather than writing C code to
        call nextafterf(), display the results as above, and then
        cut-and-paste -- so I added 0.2 instead of 2.0 when I made
        the change by hand.)
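
        If you would rather let the machine do the typing, here is a minimal
        sketch (assuming a C99 library that provides nextafterf() and
        INFINITY, and IEEE-style 24-bit floats):

        #include <stdio.h>
        #include <math.h>
        int main(void)
        {
            /* Nearest representable floats just below 2^23 and just above 2^24. */
            printf("nextafterf(8388608.0f, -inf)  = %.1f\n",
                   nextafterf(8388608.0f, -INFINITY));
            printf("nextafterf(16777216.0f, +inf) = %.1f\n",
                   nextafterf(16777216.0f, INFINITY));
            return 0;
        }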
        --
        In-Real-Life: Chris Torek, Wind River Systems
        Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
        email: forget about it http://web.torek.net/torek/index.html
        Reading email is like searching for food in the garbage, thanks to spammers.

        • Fred Ma

          #19
          Re: Can a double always represent an int exactly?

          Chris Torek wrote:
          >
          > A few minor corrections...
          >
          > In article <cl9nfk0rqd@news2.newsguy.com> I wrote (in part):
          > >>... a double has nonuniform precision throughout its value range.
          > >
          > >This is correct (well, I can imagine a weird implementation that
          > >deliberately makes "double"s have constant precision by often
          > >wasting a lot of space; it seems quite unlikely though).
          >
          > It occurs to me now that "precision" is not properly defined here.
          > When dealing with scientific notation and decimal numbers, something
          > like 1.23e+10 is less precise than 1.230e+10. The precision here
          > is determined by the number of digits in the mantissa (which is
          > why we have to use the "e+10" notation to suppress "unwanted"
          > trailing zeros).
          >
          > Using this definition of precision, and keeping in mind that most
          > computers today use powers of 2 (binary floating point) rather than
          > powers of ten (decimal floating point), we actually do have "constant
          > precision", such as "always exactly 24 bits of mantissa" (provided
          > we ignore those pesky "denorms" :-) ).
          >
          > This is of course not what the original poster and I meant by
          > "precision" (as illustrated below) -- we were referring to digits
          > beyond the decimal point after conversion to printed form via "%f",
          > for instance. Note, however, that IBM "hex float" (as used on
          > S/360 -- floating point with a radix of 16 instead of 2) really
          > *does* have "precision wobble": the number of "useful" bits in the
          > mantissa changes as numbers change in magnitude. This gives the
          > numerical analysis folks headaches. IEEE floating point is rather
          > better behaved.
          >
          > I need to fix one more typo though:
          >
          > >... [using] "float" ... on most of today's implementations.
          > >Here, we get "interesting" effects near 8388608.0 and 16777216.0.
          > >Values below 16777216.0 step by ones: 8388608.0 is followed
          > >immediately by 8388609.0, for instance, and 16777215.0 is followed
          > >immediately by 16777216.0. On the other hand, below (float)(1<<23)
          > >or above (float)(1<<24), we step by 1/2 or 2 respectively. Using
          > >nextafterf() (if you have it) and variables set to the right values,
          > >you might printf() some results and find:
          > >
          > > nextafterf(8388608.0, -inf) = 8388607.5
          > > nextafterf(16777216.0, +inf) = 16777216.2
          >
          > This last line should read:
          >
          > nextafterf(16777216.0, +inf) = 16777218.0
          >
          > (I typed this all in manually, rather than writing C code to
          > call nextafterf(), display the results as above, and then
          > cut-and-paste -- so I added 0.2 instead of 2.0 when I made
          > the change by hand.)

          Chris, thanks for the correction. I think I got the gist of
          it from your original post. I did a blanket reply elaborating on it,
          Fri. Oct. 22, Message-ID <41788D70.E66299A5@doe.carleton.ca>. Thanks
          for helping me get my brain around it, and if you have any comments
          on that, I'm certainly interested.

          Fred

          • Dik T. Winter

            #20
            Re: Can a double always represent an int exactly?

            In article <4179B22E.4B2C5B2F@doe.carleton.ca> Fred Ma <fma@doe.carleton.ca> writes:
            ...
            > Seriously, I wasn't implying that practical implementations of
            > division were necessarily sophisticated enough to recognize
            > equivalence of numerator and denominator. What I should have
            > said was that I can see such a discrepancy arising, since
            > division is not straightforward to implement. I'm talking about
            > cases that aren't optimized away at compile time.

            It is not straightforward to implement. Nevertheless, whenever the FPU
            conforms to the IEEE standard, the division *must* deliver the exact
            answer if the quotient is representable. So all systems using such
            FPUs (and that is the majority at this moment) should deliver 1.0 when
            confronted with a/a, in whatever way it is disguised. Getting division
            right is not straightforward, but it is not so very difficult either.

            That Keith Thompson found that it was not the case on a Cray SV1 is
            entirely because that system does not have an IEEE-conforming floating
            point system. (That machine does not have a divide instruction. It
            calculates an approximation of the inverse of the denominator and
            multiplies it by the numerator, and one Newton iteration is performed.
            Due to some quirks it may give an inexact result. If I remember
            right, the smallest integral division that is inexact is 17.0/17.0.)
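
            To get a feel for how a reciprocal-plus-Newton scheme can miss, here
            is a rough sketch of the general technique only (a deliberately crude
            seed and a single iteration; this is not the SV1's actual algorithm):

            #include <stdio.h>

            /* q = n / d computed as n * (approximate 1/d), refined by one
               Newton-Raphson step x <- x * (2 - d * x).  With a poor seed
               and a single step, n == d need not give exactly 1.0. */
            static double nr_divide(double n, double d, double seed)
            {
                double x = seed * (2.0 - d * seed);  /* one Newton step toward 1/d */
                return n * x;
            }

            int main(void)
            {
                double q = nr_divide(17.0, 17.0, 0.05);  /* crude guess at 1/17 */
                printf("17.0/17.0 = %.17g (%s 1.0)\n", q, q == 1.0 ? "==" : "!=");
                return 0;
            }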
            --
            dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
            home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

            • Nils O. Selåsdal

              #21
              Re: Can a double always represent an int exactly?

              dandelion wrote:
              > "Fred Ma" <fma@doe.carleton.ca> wrote in message
              > news:4178F6F1.212A0C4E@doe.carleton.ca...
              >
              >>dandelion wrote:
              >>
              >>>M_PI/M_PI seldomly equals 1.000000.
              >>
              >>I imagine that would depend on how division is implemented.
              >>
              >>Fred
              >
              > Of course, that's why I wrote "seldomly". And which implementation would
              > return 1.000000, exactly? I'm curious. Try a few CPU's/FPU's and check the
              > results. I'll buy you a beer if you find one.
              This one does it for me.. is it the printf fscking up, or ..

              #include <math.h>
              #include <stdio.h>

              float divide_me(float f){
                  return f/M_PI;
              }

              int main()
              {
                  printf("%.26f\n", divide_me(M_PI));
                  return 0;
              }
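
              The printf is probably innocent here; the interesting part is the
              round trip through float. A sketch of what seems to be happening
              (assuming IEEE single and double precision): M_PI is rounded to
              float on the way into divide_me(), the quotient against the double
              M_PI is then slightly above 1, and rounding that result back to
              float for the return value happens to land on exactly 1.0:

              #include <math.h>
              #include <stdio.h>

              int main(void)
              {
                  float  f = M_PI;      /* M_PI rounded to float, a bit above pi */
                  double q = f / M_PI;  /* roughly 1.000000028, not exactly 1.0  */
                  float  r = (float)q;  /* rounds back to exactly 1.0f           */

                  printf("f        = %.17g\n", (double)f);
                  printf("q        = %.17g\n", q);
                  printf("(float)q = %.26f\n", (double)r);
                  return 0;
              }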

              • dandelion

                #22
                Re: Can a double always represent an int exactly?

                > Caveats: A moderately clever compiler could compute the value at
                > compilation time (I didn't check this, but I didn't use any
                > optimization options). And of course M_PI is non-standard.


                Your beer is waiting... Nice and cold. You earned it, I picked a silly
                example.




                • Fred Ma

                  #23
                  Re: Can a double always represent an int exactly?

                  "Dik T. Winter" wrote:[color=blue]
                  >
                  > In article <4179B22E.4B2C5 B2F@doe.carleto n.ca> Fred Ma <fma@doe.carlet on.ca> writes:
                  > ...[color=green]
                  > > Seriously, I wasn't implying that practical implementations of
                  > > division were necessarily sophisticated enough to recognize
                  > > equivalence of numerator and denominator. What I should ahve
                  > > said was that I can see such a discrepancy arising, since
                  > > division is not straightforward to implement. I'm talking about
                  > > cases that aren't optimized away at compile time.[/color]
                  >
                  > It is not straightforward to implement. Nevertheless, whenever the FPU
                  > conforms to the IEEE standard the division *must* deliver the exact
                  > answer if the quotient is representable. So on all systems using such
                  > FPU's (and that is the majority at this moment) should deliver 1.0 when
                  > confronted with a/a, in whatever way it is disguised. To get division
                  > right is not straigthforward , but it is not so very difficult either.
                  >
                  > That Keith Thompson found that it was not the case on a Cray SV1 is
                  > entirely because that system has not an IEEE conforming floating point
                  > system. (That machine does not have a divide instruction. It
                  > calculates an approximation of the inverse of the denominator and
                  > multiplies with the numerator, and one Newton iteration is performed.
                  > Due to some quirks it may give an inexact result. If I remember
                  > right, the smallest integral division that is inexact is 17.0/17.0.
                  > --
                  > dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
                  > home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/[/color]


                  Thanks for that. It's useful to know.

                  Fred
