Re: Sine code for ANSI C
"osmium" <r124c4u102@comcast.net> wrote in message
news:c75iuo$i40p9$1@ID-179017.news.uni-berlin.de...
> CBFalconer writes:
>
> > "P.J. Plauger" wrote:
> > >
> > ... snip ...
> > > coefficients. But doing proper argument reduction is an open
> > > ended exercise in frustration. Just reducing the argument modulo
> > > 2*pi quickly accumulates errors unless you do arithmetic to
> > > many extra bits of precision.
> >
> > And that problem is inherent. Adding precision bits for the
> > reduction will not help, because the input value doesn't have
> > them. It is the old problem of differences of similar sized
> > quantities.
>
> Huh? If I want the phase of an oscillator after 50,000 radians are you
> saying that is not computable? Please elaborate.
>
> There was a thread hereabouts many months ago on this very subject and
> AFAIK no one suggested that it was not computable, it just couldn't be
> done with doubles. And I see no inherent problems.
Right. This difference of opinion highlights two conflicting
interpretations of floating-point numbers:
1) They're fuzzy. Assume the first discarded bit is
somewhere between zero and one. With this viewpoint,
CBFalconer is correct that there's no point in trying
to compute a sine accurately for large arguments --
all the good bits get lost.
2) They are what they are. Assume that every floating-point
representation exactly represents some value, however that
representation arose. With this viewpoint, osmium is correct
that there's a corresponding sine that is worth computing
to full machine precision.
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.
I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.
P.J. Plauger
Dinkumware, Ltd.