Sine code for ANSI C

  • CBFalconer

    #31
    Re: Sine code for ANSI C

    "P.J. Plauger" wrote:[color=blue]
    > "CBFalconer " <cbfalconer@yah oo.com> wrote in message
    >[/color]
    .... snip ...[color=blue]
    >[color=green]
    >> In a way it is unfortunate that hardware has
    >> replaced software floating point, because it makes it
    >> impracticable to fix problems.[/color]
    >
    > I don't understand that comment.[/color]

    I mean that the library/routine creator is fairly well restricted
    to the FP format supplied by the hardware; departing from it
    carries horrendous time penalties.
    [color=blue]
    >[color=green]
    >> Not all applications require the same FP library, and the usual
    >> library/linker etc construction of C applications makes such
    >> customization fairly easy.[/color]
    >
    > Indeed. That's how we can sell replacement and add-on libraries.
    >[/color]
    .... snip ...[color=blue]
    >[color=green]
    >> A precision measure (count of significant bits) would be handy.[/color]
    >
    > That's the ulp measure described below.[/color]

    I meant something that was automatically associated with each
    value, and revised by the process of calculation. For example,
    multiplying something with 3 digit precision by something with two
    digit precision can have no better than 2 digit precision,
    regardless of actual significant bits generated. Subtraction of
    similarly sized numbers can produce violent reductions in
    significance. This is the sort of thing one can build into a
    custom software FP representation, rather than doing some sort of
    worst case analysis up front.
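
    A minimal sketch of that idea in C (my illustration, not from the
    thread; the struct and its propagation rules are assumptions, and
    real significance arithmetic is considerably more subtle):

    #include <math.h>

    /* Hypothetical "self-describing" value: a double plus a count of
       significant bits that every operation is obliged to update. */
    struct sigfp {
        double value;
        int    sigbits;    /* leading bits of value that can be trusted */
    };

    /* Multiplication: the result is no more significant than the worse
       operand. */
    static struct sigfp sig_mul(struct sigfp a, struct sigfp b)
    {
        struct sigfp r;
        r.value   = a.value * b.value;
        r.sigbits = a.sigbits < b.sigbits ? a.sigbits : b.sigbits;
        return r;
    }

    /* Subtraction: cancellation between similarly sized operands destroys
       leading bits; the loss is roughly how far the result fell below the
       larger operand's magnitude. */
    static struct sigfp sig_sub(struct sigfp a, struct sigfp b)
    {
        struct sigfp r;
        int lost = 0;

        r.value = a.value - b.value;
        if (r.value != 0.0) {
            double big = fabs(a.value) > fabs(b.value) ? fabs(a.value)
                                                       : fabs(b.value);
            lost = ilogb(big) - ilogb(r.value);   /* bits cancelled */
            if (lost < 0)
                lost = 0;
        }
        r.sigbits = (a.sigbits < b.sigbits ? a.sigbits : b.sigbits) - lost;
        if (r.sigbits < 0)
            r.sigbits = 0;
        return r;
    }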

    The replacement library can hardly replace the underlying FP
    operations, barring the sort of kludges designed to use or bypass
    emulators.
    [color=blue]
    >[color=green][color=darkred]
    >>> -- We provide full accuracy only for a handful of the commonest
    >>> math functions. But we generally aim for worst-case errors of
    >>> 3 ulp (units in the least-significant place) for float, 4 for
    >>> double, and 5 for 113-bit long double. The rare occasions
    >>> where we fail to achieve this goal are around the zeros of
    >>> messy functions.[/color]
    >>
    >> Those objectives are generally broad enough to eliminate any worry
    >> over whether the next missing bit is 0 or 1.[/color]
    >
    > No, they're orthogonal to that issue, as I keep trying to explain
    > to you. We do the best we can with the function arguments given.
    > It is the inescapable responsibility of the programmer to
    > understand the effect of uncertainties in those argument values on
    > the uncertainties in the function results.[/color]

    Where you assume that those arguments are exact. In reality, if
    they come from almost any physical entity, they express some form
    of measurement range. The only exactness is in the individual
    terms of a Taylor series, for example. If you compute something
    to 3 ulp, a 1/2 bit error in the argument value is practically
    meaningless and can usually be ignored, barring a nearby
    singularity. All of which leads to the same thing.

    Without suitable actions and corrections, results can have
    unexpected biases. The horrible example is failure to round (as
    did some of the early Microsoft Basic fp routines, and my first
    efforts :-). A more subtle example is always rounding up (or
    down) from 0.5.
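
    A toy illustration of that 0.5 bias (mine, not from the post): round
    a run of exact halfway cases with round-half-away-from-zero and with
    the default round-to-nearest-even, then compare the totals.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double sum_true = 0.0, sum_up = 0.0, sum_even = 0.0;
        int k;

        for (k = 0; k < 1000; k++) {
            double x = k + 0.5;      /* exactly representable halfway case */
            sum_true += x;
            sum_up   += round(x);    /* halfway cases go away from zero    */
            sum_even += rint(x);     /* default mode: ties go to even      */
        }
        printf("true %.1f  half-up %.1f  half-even %.1f\n",
               sum_true, sum_up, sum_even);
        /* In the default rounding mode this prints
           true 500000.0  half-up 500500.0  half-even 500000.0 --
           always rounding 0.5 upward injects a systematic bias. */
        return 0;
    }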

    None of this is criticism, but I think it is essential to have a
    clear idea of the possible failure mechanisms of our algorithms in
    order to use them properly. Early nuclear pulse height analyzers
    gave peculiar results until the effects of differential
    non-linearity were recognized. After that the causes of such
    non-linearity were often weird and seemingly totally disconnected.

    --
    fix (vb.): 1. to paper over, obscure, hide from public view; 2.
    to work around, in a way that produces unintended consequences
    that are worse than the original problem. Usage: "Windows ME
    fixes many of the shortcomings of Windows 98 SE". - Hutchison



    • Christian Bau

      #32
      Re: Sine code for ANSI C

      In article <W1wlc.78210$G_.55362@nwrddc02.gnilink.net>,
      "P.J. Plauger" <pjp@dinkumware.com> wrote:
      [color=blue]
      > I've gone to both extremes over the past several decades.
      > Our latest math library, still in internal development,
      > can get exact function values for *all* argument values.
      > It uses multi-precision argument reduction that can gust
      > up to over 4,000 bits [sic]. "The Standard C Library"
      > represents an intermediate viewpoint -- it stays exact
      > until about half the fraction bits go away.
      >
      > I still haven't decided how hard we'll try to preserve
      > precision for large arguments in the next library we ship.[/color]

      Obviously you don't get _exact_ values for any argument except sin (0.0)
      :-).

      I guess you mean "the exact value, correctly rounded to the nearest
      valid floating point number", since libraries that round to one of the
      two nearest valid floating point numbers are already available?


      • -wombat-

        #33
        Re: Sine code for ANSI C

        suri wrote:
        [color=blue]
        > im sorry i meant i *could not* find the file
        >
        > suri wrote:[color=green]
        >> I do know the series expansion of sine i was just interested to know how
        >> its implemented in the ansi C library. like how many terms of the
        >> infinite series are included.
        >>
        >> I have linux and use glibc. so i could find the file in the path u
        >> mentioned[/color][/color]

        I stopped using Linux shortly after growing out of puberty. I developed more
        sophisticated tastes in life... but I digress...

        As someone else pointed out, there is no canonical ANSI C implementation.
        There are various ways of computing transcendental functions, with
        differing accuracies and efficiencies.

        The FreeBSD code mentions "a special Remez algorithm", but it boils down to
        a Horner-method polynomial evaluation (and if you don't know what the
        Horner method is, google it); a short usage sketch follows the listing:
        ----------------------------------------------------------------------------
        /*
        * Copyright (c) 1987, 1993
        * The Regents of the University of California. All rights reserved.
        *
        * Redistribution and use in source and binary forms, with or without
        * modification, are permitted provided that the following conditions
        * are met:
        * 1. Redistributions of source code must retain the above copyright
        * notice, this list of conditions and the following disclaimer.
        * 2. Redistributions in binary form must reproduce the above copyright
        * notice, this list of conditions and the following disclaimer in the
        * documentation and/or other materials provided with the distribution.
        * 3. All advertising materials mentioning features or use of this software
        * must display the following acknowledgement:
        * This product includes software developed by the University of
        * California, Berkeley and its contributors.
        * 4. Neither the name of the University nor the names of its contributors
        * may be used to endorse or promote products derived from this software
        * without specific prior written permission.
        *
        * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
        * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
        * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
        * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
        * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
        * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
        * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
        * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
        * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
        * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
        * SUCH DAMAGE.
        *
        * @(#)trig.h 8.1 (Berkeley) 6/4/93
        */

        #include "mathimpl.h "

        vc(thresh, 2.6117239648121 182150E-1 ,b863,3f85,6ea0 ,6b02,
        -1, .85B8636B026EA0 )
        vc(PIo4, 7.8539816339744 830676E-1 ,0fda,4049,68c2 ,a221,
        0, .C90FDAA22168C2 )
        vc(PIo2, 1.5707963267948 966135E0 ,0fda,40c9,68c2 ,a221,
        1, .C90FDAA22168C2 )
        vc(PI3o4, 2.3561944901923 449203E0 ,cbe3,4116,0e92 ,f999,
        2, .96CBE3F9990E92 )
        vc(PI, 3.1415926535897 932270E0 ,0fda,4149,68c2 ,a221,
        2, .C90FDAA22168C2 )
        vc(PI2, 6.2831853071795 864540E0 ,0fda,41c9,68c2 ,a221,
        3, .C90FDAA22168C2 )

        ic(thresh, 2.6117239648121 182150E-1 , -2, 1.0B70C6D604DD4 )
        ic(PIo4, 7.8539816339744 827900E-1 , -1, 1.921FB54442D18 )
        ic(PIo2, 1.5707963267948 965580E0 , 0, 1.921FB54442D18 )
        ic(PI3o4, 2.3561944901923 448370E0 , 1, 1.2D97C7F3321D2 )
        ic(PI, 3.1415926535897 931160E0 , 1, 1.921FB54442D18 )
        ic(PI2, 6.2831853071795 862320E0 , 2, 1.921FB54442D18 )

        #ifdef vccast
        #define thresh vccast(thresh)
        #define PIo4 vccast(PIo4)
        #define PIo2 vccast(PIo2)
        #define PI3o4 vccast(PI3o4)
        #define PI vccast(PI)
        #define PI2 vccast(PI2)
        #endif

        #ifdef national
        static long fmaxx[] = { 0xffffffff, 0x7fefffff};
        #define fmax (*(double*)fmaxx)
        #endif /* national */

        static const double
        zero = 0,
        one = 1,
        negone = -1,
        half = 1.0/2.0,
        small = 1E-10, /* 1+small**2 == 1; better values for small:
        * small = 1.5E-9 for VAX D
        * = 1.2E-8 for IEEE Double
        * = 2.8E-10 for IEEE Extended
        */
        big = 1E20; /* big := 1/(small**2) */

        /* sin__S(x*x) ... re-implemented as a macro
        * DOUBLE PRECISION (VAX D format 56 bits, IEEE DOUBLE 53 BITS)
        * STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
        * CODED IN C BY K.C. NG, 1/21/85;
        * REVISED BY K.C. NG on 8/13/85.
        *
        *                 sin(x*k) - x
        * RETURN         --------------- on [-PI/4,PI/4], where k=pi/PI, PI is the
        *                       x
        * rounded value of pi in machine precision:
        *
        * Decimal:
        *   pi      = 3.141592653589793 23846264338327 .....
        *   53 bits PI = 3.141592653589793 115997963 ..... ,
        *   56 bits PI = 3.141592653589793 227020265 ..... ,
        *
        * Hexadecimal:
        *   pi      = 3.243F6A8885A308D313198A2E....
        *   53 bits PI = 3.243F6A8885A30  = 2 * 1.921FB54442D18
        *   56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
        *
        * Method:
        *   1. Let z=x*x. Create a polynomial approximation to
        *          (sin(k*x)-x)/x = z*(S0 + S1*z^1 + ... + S5*z^5).
        *      Then
        *          sin__S(x*x) = z*(S0 + S1*z^1 + ... + S5*z^5)
        *
        *   The coefficient S's are obtained by a special Remez algorithm.
        *
        * Accuracy:
        *   In the absence of rounding error, the approximation has absolute error
        *   less than 2**(-61.11) for VAX D FORMAT, 2**(-57.45) for IEEE DOUBLE.
        *
        * Constants:
        *   The hexadecimal values are the intended ones for the following constants.
        *   The decimal values may be used, provided that the compiler will convert
        *   from decimal to binary accurately enough to produce the hexadecimal values
        *   shown.
        *
        */

        vc(S0, -1.6666666666666646660E-1  ,aaaa,bf2a,aa71,aaaa,  -2, -.AAAAAAAAAAAA71)
        vc(S1,  8.3333333333297230413E-3  ,8888,3d08,477f,8888,  -6,  .8888888888477F)
        vc(S2, -1.9841269838362403710E-4  ,0d00,ba50,1057,cf8a, -12, -.D00D00CF8A1057)
        vc(S3,  2.7557318019967078930E-6  ,ef1c,3738,bedc,a326, -18,  .B8EF1CA326BEDC)
        vc(S4, -2.5051841873876551398E-8  ,3195,b3d7,e1d3,374c, -25, -.D73195374CE1D3)
        vc(S5,  1.6028995389845827653E-10 ,3d9c,3030,cccc,6d26, -32,  .B03D9C6D26CCCC)
        vc(S6, -6.2723499671769283121E-13 ,8d0b,ac30,ea82,7561, -40, -.B08D0B7561EA82)

        ic(S0, -1.6666666666666463126E-1  ,  -3, -1.555555555550C)
        ic(S1,  8.3333333332992771264E-3  ,  -7,  1.111111110C461)
        ic(S2, -1.9841269816180999116E-4  , -13, -1.A01A019746345)
        ic(S3,  2.7557309793219876880E-6  , -19,  1.71DE3209CDCD9)
        ic(S4, -2.5050225177523807003E-8  , -26, -1.AE5C0E319A4EF)
        ic(S5,  1.5868926979889205164E-10 , -33,  1.5CF61DF672B13)

        #ifdef vccast
        #define S0 vccast(S0)
        #define S1 vccast(S1)
        #define S2 vccast(S2)
        #define S3 vccast(S3)
        #define S4 vccast(S4)
        #define S5 vccast(S5)
        #define S6 vccast(S6)
        #endif

        #if defined(vax)||defined(tahoe)
        #  define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*(S5+z*S6)))))))
        #else  /* defined(vax)||defined(tahoe) */
        #  define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*S5))))))
        #endif /* defined(vax)||defined(tahoe) */

        /* cos__C(x*x) ... re-implemented as a macro
        * DOUBLE PRECISION (VAX D FORMAT 56 BITS, IEEE DOUBLE 53 BITS)
        * STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
        * CODED IN C BY K.C. NG, 1/21/85;
        * REVISED BY K.C. NG on 8/13/85.
        *
        *                          x*x
        * RETURN   cos(k*x) - 1 + ----- on [-PI/4,PI/4], where k = pi/PI,
        *                           2
        * PI is the rounded value of pi in machine precision:
        *
        * Decimal:
        *   pi      = 3.141592653589793 23846264338327 .....
        *   53 bits PI = 3.141592653589793 115997963 ..... ,
        *   56 bits PI = 3.141592653589793 227020265 ..... ,
        *
        * Hexadecimal:
        *   pi      = 3.243F6A8885A308D313198A2E....
        *   53 bits PI = 3.243F6A8885A30  = 2 * 1.921FB54442D18
        *   56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
        *
        *
        * Method:
        *   1. Let z=x*x. Create a polynomial approximation to
        *          cos(k*x)-1+z/2 = z*z*(C0 + C1*z^1 + ... + C5*z^5)
        *      then
        *          cos__C(z) = z*z*(C0 + C1*z^1 + ... + C5*z^5)
        *
        *   The coefficient C's are obtained by a special Remez algorithm.
        *
        * Accuracy:
        *   In the absence of rounding error, the approximation has absolute error
        *   less than 2**(-64) for VAX D FORMAT, 2**(-58.3) for IEEE DOUBLE.
        *
        *
        * Constants:
        *   The hexadecimal values are the intended ones for the following constants.
        *   The decimal values may be used, provided that the compiler will convert
        *   from decimal to binary accurately enough to produce the hexadecimal values
        *   shown.
        */

        vc(C0,  4.1666666666666504759E-2  ,aaaa,3e2a,a9f0,aaaa,  -4,  .AAAAAAAAAAA9F0)
        vc(C1, -1.3888888888865302059E-3  ,0b60,bbb6,0cca,b60a,  -9, -.B60B60B60A0CCA)
        vc(C2,  2.4801587285601038265E-5  ,0d00,38d0,098f,cdcd, -15,  .D00D00CDCD098F)
        vc(C3, -2.7557313470902390219E-7  ,f27b,b593,e805,b593, -21, -.93F27BB593E805)
        vc(C4,  2.0875623401082232009E-9  ,74c8,320f,3ff0,fa1e, -28,  .8F74C8FA1E3FF0)
        vc(C5, -1.1355178117642986178E-11 ,c32d,ae47,5a63,0a5c, -36, -.C7C32D0A5C5A63)

        ic(C0,  4.1666666666666504759E-2  ,  -5,  1.555555555553E)
        ic(C1, -1.3888888888865301516E-3  , -10, -1.6C16C16C14199)
        ic(C2,  2.4801587269650015769E-5  , -16,  1.A01A01971CAEB)
        ic(C3, -2.7557304623183959811E-7  , -22, -1.27E4F1314AD1A)
        ic(C4,  2.0873958177697780076E-9  , -29,  1.1EE3B60DDDC8C)
        ic(C5, -1.1250289076471311557E-11 , -37, -1.8BD5986B2A52E)

        #ifdef vccast
        #define C0 vccast(C0)
        #define C1 vccast(C1)
        #define C2 vccast(C2)
        #define C3 vccast(C3)
        #define C4 vccast(C4)
        #define C5 vccast(C5)
        #endif

        #define cos__C(z) (z*z*(C0+z*(C1+z*(C2+z*(C3+z*(C4+z*C5))))))
        ----------------------------------------------------------------------------
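
        For what it's worth, here is a rough sketch (mine, not part of the
        FreeBSD file) of how a kernel like sin__S is used once the argument
        has been reduced to [-PI/4, PI/4]: evaluate the Horner-form polynomial
        in z = x*x and add the correction back onto x. The coefficients are
        the IEEE-double ic() values from the listing above; the function name
        and the test value are my own, and I ignore the file's fine
        distinction between pi and its machine-rounded value PI.

        #include <stdio.h>

        /* IEEE-double coefficients copied from the ic() lines above. */
        static const double S0 = -1.6666666666666463126E-1,
                            S1 =  8.3333333332992771264E-3,
                            S2 = -1.9841269816180999116E-4,
                            S3 =  2.7557309793219876880E-6,
                            S4 = -2.5050225177523807003E-8,
                            S5 =  1.5868926979889205164E-10;

        /* Horner evaluation of (sin(x)-x)/x as a polynomial in z = x*x;
           valid only for an argument already reduced to |x| <= PI/4. */
        static double sin_kernel(double x)
        {
            double z = x * x;
            double p = z * (S0 + z * (S1 + z * (S2 + z * (S3 + z * (S4 + z * S5)))));
            return x + x * p;
        }

        int main(void)
        {
            /* 0.5 lies inside [-PI/4, PI/4]; sin(0.5) = 0.4794255386042030... */
            printf("%.17g\n", sin_kernel(0.5));
            return 0;
        }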


        • Christian Bau

          #34
          Re: Sine code for ANSI C

          In article <409764B2.4E0914FA@yahoo.com>,
          CBFalconer <cbfalconer@yahoo.com> wrote:
          [color=blue]
          > At any rate, the ideal system produces instantaneous results with
          > no range limitations and the full accuracy of the arithmetic
          > representation. So the designer should pick a compromise that
          > will suffice for most anticipated users. Here there be dragons.[/color]

          Alternatively, a choice of functions that use different compromises
          between speed and precision. Let the user of the library decide what is
          more important.
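
          A sketch of what that could look like at the interface level
          (hypothetical names, not anything an existing libm actually
          provides):

          /* Hypothetical header: let the caller pick the trade-off. */
          double sin_fast(double x);     /* a few ulp, no wide-range reduction    */
          double sin_accurate(double x); /* near-correct rounding, full reduction */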


          • Christian Bau

            #35
            Re: Sine code for ANSI C

            In article <BNKlc.149955$L31.89062@nwrddc01.gnilink.net>,
            "P.J. Plauger" <pjp@dinkumware.com> wrote:
            [color=blue]
            > We aim to please more exacting users, however, to the point that
            > they often don't notice the pains we've taken to stay out of their
            > way.[/color]

            Just a thought: High quality arithmetic especially benefits the clueless
            user, who would have no idea why things go wrong when they do, and no
            chance of fixing them.


            • CBFalconer

              #36
              Re: Sine code for ANSI C

              Christian Bau wrote:[color=blue]
              > CBFalconer <cbfalconer@yahoo.com> wrote:
              >[color=green]
              > > At any rate, the ideal system produces instantaneous results with
              > > no range limitations and the full accuracy of the arithmetic
              > > representation. So the designer should pick a compromise that
              > > will suffice for most anticipated users. Here there be dragons.[/color]
              >
              > Alternatively, a choice of functions that use different compromises
              > between speed and precision. Let the user of the library decide what
              > is more important.[/color]

              That is not as feasible as changing complete libraries. For one
              thing, the language specifies the names involved. For another, an
              efficient library is highly interlinked. A routine to accept an
              argument and a pointer to a structure with a list of coefficients
              for polynomial expansion is likely to be heavily used. So a mix
              and match strategy is very likely to create virtually insoluble
              bugs. The need for -lm in linking makes this easy, by simply
              replacing the library proper at the appropriate times.

              --
              fix (vb.): 1. to paper over, obscure, hide from public view; 2.
              to work around, in a way that produces unintended consequences
              that are worse than the original problem. Usage: "Windows ME
              fixes many of the shortcomings of Windows 98 SE". - Hutchison



              • P.J. Plauger

                #37
                Re: Sine code for ANSI C

                "Christian Bau" <christian.bau@ cbau.freeserve. co.uk> wrote in message
                news:christian. bau-8696B7.20192204 052004@slb-newsm1.svr.pol. co.uk...
                [color=blue]
                > In article <W1wlc.78210$G_.55362@nwrddc02.gnilink.net>,
                > "P.J. Plauger" <pjp@dinkumware.com> wrote:
                >[color=green]
                > > I've gone to both extremes over the past several decades.
                > > Our latest math library, still in internal development,
                > > can get exact function values for *all* argument values.
                > > It uses multi-precision argument reduction that can gust
                > > up to over 4,000 bits [sic]. "The Standard C Library"
                > > represents an intermediate viewpoint -- it stays exact
                > > until about half the fraction bits go away.
                > >
                > > I still haven't decided how hard we'll try to preserve
                > > precision for large arguments in the next library we ship.[/color]
                >
                > Obviously you don't get _exact_ values for any argument except sin (0.0)
                > :-).[/color]

                And +1, and -1, of course. Yes, IEC 60559 requires that the
                floating-point inexact status bit be set for any value that
                can't be precisely represented, and we do that. What I
                meant by "exact" above was shorthand for "the nearest
                internal representation to the actual function value, taking
                the argument(s) as exact."
                [color=blue]
                > I guess you mean "the exact value, correctly rounded to the nearest
                > valid floating point number", since libraries that round to one of the
                > two nearest valid floating point numbers are already available?[/color]

                Several libraries do give the best rounded result for sin(x), for
                small enough x. One or two are perverse enough to do proper
                argument reduction over a huge range, perhaps even the full
                range of values. Several more sensible libraries compute a very
                good sin(x) over a large but not unbounded range. I've yet to
                decide which of these choices best serves our customers.
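
                To make the stakes concrete (my example, not Plauger's): reducing
                a moderately large argument with the double-precision value of
                2*pi already costs several digits, because each subtracted period
                carries the rounding error of that constant. A hedged sketch, using
                the common (though not strictly ANSI) M_PI macro:

                #include <math.h>
                #include <stdio.h>

                int main(void)
                {
                    double x = 50000.0;                 /* the oscillator-phase example */
                    double naive = fmod(x, 2.0 * M_PI); /* reduce with rounded 2*pi     */

                    /* About 7957 periods are removed, each off by roughly 2.4e-16,
                       so the reduced argument -- and the sine computed from it --
                       can be wrong by about 2e-12.  A libm that reduces in extended
                       precision will typically disagree with the naive result in
                       the last few digits. */
                    printf("sin(fmod(x, 2*M_PI)) = %.17g\n", sin(naive));
                    printf("sin(x)               = %.17g\n", sin(x));
                    return 0;
                }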

                P.J. Plauger
                Dinkumware, Ltd.




                • Paul Hsieh

                  #38
                  Re: Sine code for ANSI C

                  -wombat- <scottm@cs.ucla.edu> wrote:[color=blue]
                  > suri wrote:[color=green]
                  > > No i knew that faq.
                  > > my question is where is the sine fn imlementation?
                  > > since u say its a native instruction set means that there is no code in
                  > > ansi C for implementing sine.?[/color]
                  >
                  > Some platforms have hardware instructions that compute sin() and the
                  > compiler will emit them, bypassing the libm library's implementation. This
                  > is pretty much true for ia32 and ia64.[/color]

                  I'm pretty sure it's not true of IA64, and it's not really true of
                  AMD64 either. (Though I personally don't agree with the choice in
                  either case.)

                  As to the original question:



                  shows some techniques that I think are close to the state of the art.

                  --
                  Paul Hsieh
                  http://www.pobox.com/~qed/
                  http://bstring.sf.net/



                  • Christian Bau

                    #39
                    Re: Sine code for ANSI C

                    In article <DKXlc.27284$sK3.10728@nwrddc03.gnilink.net>,
                    "P.J. Plauger" <pjp@dinkumware.com> wrote:
                    [color=blue]
                    > "Christian Bau" <christian.bau@ cbau.freeserve. co.uk> wrote in message
                    > news:christian. bau-8696B7.20192204 052004@slb-newsm1.svr.pol. co.uk...
                    >[color=green]
                    > > Obviously you don't get _exact_ values for any argument except sin (0.0)
                    > > :-).[/color]
                    >
                    > And +1, and -1, of course.[/color]

                    Of course not. sin (+1) and sin (-1) are irrational, and sin (x) is not
                    +/- 1 for any rational x.
                    [color=blue]
                    > What I
                    > meant by "exact" above was shorthand for "the nearest
                    > internal representation to the actual function value, taking
                    > the argument(s) as exact."[/color]

                    That would be "the exact value, correctly rounded to the nearest valid
                    floating point number", right?
                    [color=blue]
                    > Several libraries do give the best rounded result for sin(x), for
                    > small enough x.[/color]

                    I think that is actually not too difficult if you write code
                    specifically for one floating point implementation: Do the calculation
                    in double-double precision, then round to double. Estimate the maximum
                    rounding error, which should be only a tiny bit more than 1/2 ulp. Then
                    some non-trivial maths should give you a small set of worst-case numbers
                    (those values of x such that sin (x) is almost exactly between two
                    neighbouring floating point numbers).

                    If you are very careful to keep the rounding errors small then you
                    should have very few values x for which you cannot prove that you get
                    the correct result. Check those values by hand (even if you cannot
                    _prove_ that sin (x) is rounded correctly, about half the time or more
                    it should happen anyway). Play around with coefficients to get the
                    correct result more often, and write an explicit test for the remaining
                    cases.
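
                    A crude sketch of just the first step (my code; it assumes long
                    double carries more precision than double, true for x87 but not
                    everywhere, and it is not real double-double arithmetic, so it says
                    nothing about the hard halfway cases):

                    #include <math.h>

                    /* Compute sine in extended precision and round once to double.
                       Where long double has 64 or more significand bits this lands
                       within a small fraction of an ulp; proving correct rounding for
                       the worst cases still needs the analysis described above. */
                    static double sin_rounded(double x)
                    {
                        return (double)sinl((long double)x);
                    }
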
                    [color=blue]
                    > One or two are perverse enough to do proper
                    > argument reduction over a huge range, perhaps even the full
                    > range of values.[/color]

                    fdlibm does that. And an implementation that is suitable for Java is
                    forced to do this by the Java standard.
                    [color=blue]
                    > Several more sensible libraries compute a very
                    > good sin(x) over a large but not unbounded range. I've yet to
                    > decide which of these choices best serves our customers.[/color]

                    Some people would just pick which choice produces the fastest benchmark
                    results :-(

                    An interesting mathematical problem: Find approximations for sine and
                    cosine such that

                    s*s + c*c <= 1.0

                    is guaranteed when evaluated in floating-point arithmetic.
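
                    No proof, of course, but a quick empirical check of that property is
                    easy to write (my sketch; the sample count and range are arbitrary):

                    #include <math.h>
                    #include <stdio.h>
                    #include <stdlib.h>

                    int main(void)
                    {
                        int i, failures = 0;

                        srand(12345);
                        for (i = 0; i < 1000000; i++) {
                            double x = (rand() / (double)RAND_MAX) * 1000.0 - 500.0;
                            double s = sin(x), c = cos(x);

                            /* The squares and the sum are themselves rounded, so even
                               correctly rounded sin/cos can push the total above 1.0. */
                            if (s * s + c * c > 1.0)
                                failures++;
                        }
                        printf("s*s + c*c exceeded 1.0 in %d of 1000000 samples\n", failures);
                        return 0;
                    }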


                    • Dik T. Winter

                      #40
                      Re: Sine code for ANSI C

                      In article <4097D0D7.194569F1@yahoo.com> cbfalconer@worldnet.att.net writes:
                      ....[color=blue]
                      > I meant something that was automatically associated with each
                      > value, and revised by the process of calculation. For example,
                      > multiplying something with 3 digit precision by something with two
                      > digit precision can have no better that 2 digit precision,
                      > regardless of actual significant bits generated. Subtraction of
                      > similar sized numbers can produce violent reductions. This is the
                      > sort of thing one can build into a custom software FP
                      > representation, rather than doing some sort of worst case analysis
                      > up front.[/color]

                      This would mean that almost all results in numerical algebra would be
                      without value. E.g. calculating the solution of a system of 40 linear
                      equations might easily result in something without value in this measure.
                      Moreover, you have to be extremely careful here. I once designed an
                      arcsine routine where subtraction of similarly sized numbers was
                      commonplace; however, as careful analysis proved, the inaccuracies so
                      introduced played no role in the ultimate result.
                      [color=blue][color=green]
                      > > No, they're orthogonal to that issue, as I keep trying to explain
                      > > to you. We do the best we can with the function arguments given.
                      > > It is the inescapable responsibility of the programmer to
                      > > understand the effect of uncertainties in those argument values on
                      > > the uncertainties in the function results.[/color]
                      >
                      > Where you assume that those arguments are exact. In reality, if
                      > they come from almost any physical entity, they express some form
                      > of measurement range. The only exactness is in the individual
                      > terms of a Taylor series, for example. If you compute something
                      > to 3 ulp, a 1/2 bit error in the argument value is practically
                      > meaningless and can usually be ignored, barring a nearby
                      > singularity. All of which leads to the same thing.[/color]

                      Wrong. If you compute (for instance) the sine with an accuracy of 3
                      ulp, that means that you expect exact values. A 1/2 bit error in the
                      argument might give a huge change in the result.
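
                      A concrete illustration of that sensitivity (my numbers; any
                      sufficiently large argument will do): adjacent doubles near 1e16 are
                      2.0 apart, roughly a third of a full period, so their sines are
                      essentially unrelated even if each one is computed to within a few ulp.

                      #include <math.h>
                      #include <stdio.h>

                      int main(void)
                      {
                          double x  = 1.0e16;
                          double x2 = nextafter(x, INFINITY); /* next representable double */

                          /* Half an ulp of argument uncertainty here spans about a radian,
                             so the function value is meaningless -- even though each sin()
                             call may be accurate to a few ulp for its exact argument. */
                          printf("x      = %.17g   sin = %.17f\n", x,  sin(x));
                          printf("next x = %.17g   sin = %.17f\n", x2, sin(x2));
                          return 0;
                      }
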
                      --
                      dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
                      home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/


                      • Grumble

                        #41
                        Re: Sine code for ANSI C

                        Paul Hsieh wrote:
                        [color=blue]
                        > -wombat- <scottm@cs.ucla.edu> wrote:
                        >[color=green]
                        >> Some platforms have hardware instructions that compute sin()
                        >> and the compiler will emit them, bypassing the libm library's
                        >> implementation. This is pretty much true for ia32 and ia64.[/color]
                        >
                        > I'm pretty sure its not true of IA64 and its not really true of
                        > AMD64 either.[/color]

                        AMD64 supports x87. Thus FSIN is available. What did you mean by
                        "not really true of AMD64 either"?


                        [color=blue]
                        > --
                        > Paul Hsieh
                        > http://www.pobox.com/~qed/
                        > http://bstring.sf.net/[/color]

                        The signature delimiter is DASH-DASH-SPACE.
                        Yours is missing the last character.


                        • Tim Prince

                          #42
                          Re: Sine code for ANSI C


                          "Grumble" <invalid@kma.eu .org> wrote in message
                          news:c7d4nm$d7t $1@news-rocq.inria.fr.. .[color=blue]
                          > Paul Hsieh wrote:
                          >[color=green]
                          > > -wombat- <scottm@cs.ucla.edu> wrote:
                          > >[color=darkred]
                          > >> Some platforms have hardware instructions that compute sin()
                          > >> and the compiler will emit them, bypassing the libm library's
                          > >> implementation. This is pretty much true for ia32 and ia64.[/color]
                          > >
                          > > I'm pretty sure its not true of IA64 and its not really true of
                          > > AMD64 either.[/color]
                          >
                          > AMD64 supports x87. Thus FSIN is available. What did you mean by
                          > "not really true of AMD64 either" ?
                          >[/color]
                          Possibly referring to compilers complying with ABIs disallowing x87, or
                          taking advantage of higher performance of SSE parallel libraries. Use of
                          fsin on IA64 is extremely unlikely, even though it's still there.



                          • Dan Pop

                            #43
                            Re: Sine code for ANSI C

                            In <W1wlc.78210$G_.55362@nwrddc02.gnilink.net> "P.J. Plauger" <pjp@dinkumware.com> writes:
                            [color=blue]
                            >"osmium" <r124c4u102@com cast.net> wrote in message
                            >news:c75iuo$i4 0p9$1@ID-179017.news.uni-berlin.de...
                            >[color=green]
                            >> CBFalconer writes:
                            >>[color=darkred]
                            >> > "P.J. Plauger" wrote:
                            >> > >
                            >> > ... snip ...
                            >> > > coefficients. But doing proper argument reduction is an open
                            >> > > ended exercise in frustration. Just reducing the argument modulo
                            >> > > 2*pi quickly accumulates errors unless you do arithmetic to
                            >> > > many extra bits of precision.
                            >> >
                            >> > And that problem is inherent. Adding precision bits for the
                            >> > reduction will not help, because the input value doesn't have
                            >> > them. It is the old problem of differences of similar sized
                            >> > quantities.[/color]
                            >>
                            >> Huh? If I want the phase of an oscillator after 50,000 radians are you
                            >> saying that is not computable? Please elaborate.
                            >>
                            >> There was a thread hereabouts many months ago on this very subject and[/color]
                            >AFAIK[color=green]
                            >> no one suggested that it was not computable, it just couldn't be done with
                            >> doubles. And I see no inherent problems.[/color]
                            >
                            >Right. This difference of opinion highlights two conflicting
                            >interpretations of floating-point numbers:
                            >
                            >1) They're fuzzy. Assume the first discarded bit is
                            >somewhere between zero and one. With this viewpoint,
                            >CBFalconer is correct that there's no point in trying
                            >to compute a sine accurately for large arguments --
                            >all the good bits get lost.
                            >
                            >2) They are what they are. Assume that every floating-point
                            >representation exactly represents some value, however that
                            >representation arose. With this viewpoint, osmium is correct
                            >that there's a corresponding sine that is worth computing
                            >to full machine precision.
                            >
                            >I've gone to both extremes over the past several decades.
                            >Our latest math library, still in internal development,
                            >can get exact function values for *all* argument values.
                            >It uses multi-precision argument reduction that can gust
                            >up to over 4,000 bits [sic]. "The Standard C Library"
                            >represents an intermediate viewpoint -- it stays exact
                            >until about half the fraction bits go away.
                            >
                            >I still haven't decided how hard we'll try to preserve
                            >precision for large arguments in the next library we ship.[/color]

                            Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
                            second interpretation has no more clues about floating point than the
                            guys who expect 0.1 to be accurately represented in binary floating point.

                            Dan
                            --
                            Dan Pop
                            DESY Zeuthen, RZ group
                            Email: Dan.Pop@ifh.de


                            • P.J. Plauger

                              #44
                              Re: Sine code for ANSI C

                              "Dan Pop" <Dan.Pop@cern.c h> wrote in message
                              news:c7qnv8$jrj $2@sunnews.cern .ch...
                              [color=blue][color=green]
                              > > This difference of opinion highlights two conflicting
                              > >interpretations of floating-point numbers:
                              > >
                              > >1) They're fuzzy. Assume the first discarded bit is
                              > >somewhere between zero and one. With this viewpoint,
                              > >CBFalconer is correct that there's no point in trying
                              > >to compute a sine accurately for large arguments --
                              > >all the good bits get lost.
                              > >
                              > >2) They are what they are. Assume that every floating-point
                              > >representation exactly represents some value, however that
                              > >representation arose. With this viewpoint, osmium is correct
                              > >that there's a corresponding sine that is worth computing
                              > >to full machine precision.
                              > >
                              > >I've gone to both extremes over the past several decades.
                              > >Our latest math library, still in internal development,
                              > >can get exact function values for *all* argument values.
                              > >It uses multi-precision argument reduction that can gust
                              > >up to over 4,000 bits [sic]. "The Standard C Library"
                              > >represents an intermediate viewpoint -- it stays exact
                              > >until about half the fraction bits go away.
                              > >
                              > >I still haven't decided how hard we'll try to preserve
                              > >precision for large arguments in the next library we ship.[/color]
                              >
                              > Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
                              > second interpretation has no more clues about floating point than the
                              > guys who expect 0.1 to be accurately represented in binary floating point.[/color]

                              Sorry, but some of our customers are highly clued and they *do*
                              know when their floating-point numbers are fuzzy and when they're
                              not. In the latter case, the last thing they want/need is for us
                              library writers to tell them that we've taken the easy way out
                              on the assumption that their input values to our functions are
                              fuzzier than they think they are.

                              It's the job of the library writer to return the best internal
                              approximation to the function value for a given input value
                              *treated as an exact number.* If the result has fuzz in a
                              particular application, it's up to the authors of that
                              application to analyze the consequence.

                              P.J. Plauger
                              Dinkumware, Ltd.




                              • Dan Pop

                                #45
                                Re: Sine code for ANSI C

                                In <Zy7oc.186082$L31.13537@nwrddc01.gnilink.net> "P.J. Plauger" <pjp@dinkumware.com> writes:
                                [color=blue]
                                >Sorry, but some of our customers are highly clued and they *do*
                                >know when their floating-point numbers are fuzzy and when they're
                                >not.[/color]

                                Concrete examples, please.

                                Dan
                                --
                                Dan Pop
                                DESY Zeuthen, RZ group
                                Email: Dan.Pop@ifh.de
