Re: Sine code for ANSI C
"P.J. Plauger" wrote:[color=blue]
> "CBFalconer " <cbfalconer@yah oo.com> wrote in message
>[/color]
.... snip ...[color=blue]
>
>> In a way it is unfortunate that hardware has
>> replaced software floating point, because it makes it
>> impracticable to fix problems.
>
> I don't understand that comment.
I mean that the library/routine creator is fairly well restricted
to the FP format supplied by the hardware; departing from it
carries horrendous time penalties.
>
>> Not all applications require the same FP library, and the usual
>> library/linker etc. construction of C applications makes such
>> customization fairly easy.
>
> Indeed. That's how we can sell replacement and add-on libraries.
>
.... snip ...
>
>> A precision measure (count of significant bits) would be handy.
>
> That's the ulp measure described below.
I meant something that was automatically associated with each
value and revised by the process of calculation. For example,
multiplying something with 3-digit precision by something with
2-digit precision can yield no better than 2-digit precision,
regardless of the actual significant bits generated. Subtraction
of similar-sized numbers can produce violent reductions. This is
the sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst-case analysis
up front.
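
Something along the lines of this toy sketch is what I have in
mind; the struct, the digit counts, and the propagation rules are
purely illustrative, not a real implementation:

/* Toy sketch only: a value that carries its own precision estimate,
   here a count of significant decimal digits, revised by each
   operation.  Names and rules are invented for illustration. */
#include <math.h>
#include <stdio.h>

struct sigval {
    double v;      /* the value itself                     */
    int    digits; /* estimated significant decimal digits */
};

/* Multiplication: the result is no more precise than the less
   precise operand. */
static struct sigval sig_mul(struct sigval a, struct sigval b)
{
    struct sigval r;
    r.v = a.v * b.v;
    r.digits = a.digits < b.digits ? a.digits : b.digits;
    return r;
}

/* Subtraction: cancellation between similar-sized numbers wipes out
   leading digits, so the estimate shrinks by the digits lost. */
static struct sigval sig_sub(struct sigval a, struct sigval b)
{
    struct sigval r;
    double bigger = fabs(a.v) > fabs(b.v) ? fabs(a.v) : fabs(b.v);
    int lost;

    r.v = a.v - b.v;
    if (r.v == 0.0 || bigger == 0.0) {
        r.digits = 0;                    /* total cancellation */
        return r;
    }
    lost = (int)floor(log10(bigger / fabs(r.v)));
    if (lost < 0)
        lost = 0;
    r.digits = (a.digits < b.digits ? a.digits : b.digits) - lost;
    if (r.digits < 0)
        r.digits = 0;
    return r;
}

int main(void)
{
    struct sigval a = {1.2345, 3};       /* trust only 3 digits */
    struct sigval b = {1.2299, 2};       /* trust only 2 digits */
    struct sigval p = sig_mul(a, b);
    struct sigval d = sig_sub(a, b);

    printf("product    %.6f, ~%d digits\n", p.v, p.digits);
    printf("difference %.6f, ~%d digits\n", d.v, d.digits);
    return 0;
}

Here the difference of two nearly equal values ends up with roughly
zero trustworthy digits, which is the "violent reduction" above.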
The replacement library can hardly replace the underlying FP
operations, barring the sort of kluges designed to use or bypass
emulators.
>
>>> -- We provide full accuracy only for a handful of the commonest
>>> math functions. But we generally aim for worst-case errors of
>>> 3 ulp (units in the least-significant place) for float, 4 for
>>> double, and 5 for 113-bit long double. The rare occasions
>>> where we fail to achieve this goal are around the zeros of
>>> messy functions.
>>
>> Those objectives are generally broad enough to eliminate any worry
>> over whether the next missing bit is 0 or 1.
>
> No, they're orthogonal to that issue, as I keep trying to explain
> to you. We do the best we can with the function arguments given.
> It is the inescapable responsibility of the programmer to
> understand the effect of uncertainties in those argument values on
> the uncertainties in the function results.
That is where you assume those arguments are exact. In reality,
if they come from almost any physical entity, they express some
form of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2-bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads back to the same point.
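
For instance, a quick throwaway comparison of my own (nothing from
any particular library, and it leans on C99's nextafter) of the two
error sources for sin():

/* Compare the effect of roughly a half-ulp wobble in the argument of
   sin() with a 3-ulp error bound on the result.  Throwaway test. */
#include <math.h>
#include <stdio.h>

static void show(double x)
{
    double xup  = nextafter(x, INFINITY);          /* x plus one ulp */
    double darg = fabs(sin(xup) - sin(x)) / 2.0;   /* ~half ulp in x */
    double ulpy = nextafter(sin(x), INFINITY) - sin(x);

    printf("x = %.17g  arg-induced ~%.3g  3 ulp of sin(x) ~%.3g\n",
           x, darg, 3.0 * fabs(ulpy));
}

int main(void)
{
    show(1.0);               /* well away from any zero of sin        */
    show(3.14159265358979);  /* near pi: sin(x) is tiny, so the wobble
                                in x dwarfs 3 ulp of the tiny result  */
    return 0;
}

Away from the zero the argument wobble stays below the 3-ulp budget;
near pi it swamps it, which is the kind of spot where the argument's
uncertainty, not the library, dominates.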
Without suitable actions and corrections, results can have
unexpected biases. The horrible example is failure to round at all
(as did some of the early Microsoft Basic FP routines, and my first
efforts :-). A more subtle example is always rounding up (or down)
from 0.5, which biases results in that direction.
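
A throwaway demonstration of both biases, rounding the exactly
representable halfway values 0.5, 1.5, ..., 99.5 to integers three
ways:

/* Accumulate the halfway values three ways and compare with the
   exact sum of 5000.  Toy example only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double exact = 0.0, chopped = 0.0, half_up = 0.0, half_even = 0.0;
    int i;

    for (i = 0; i < 100; i++) {
        double x = i + 0.5;          /* every value is a halfway case */
        double up = floor(x + 0.5);  /* always round .5 upward        */
        double even = up;

        if (up - x == 0.5 && fmod(up, 2.0) != 0.0)
            even -= 1.0;             /* halfway and odd: go to even   */

        exact     += x;
        chopped   += floor(x);       /* truncation: no rounding       */
        half_up   += up;
        half_even += even;
    }
    printf("exact sum      %6.1f\n", exact);
    printf("truncated      %6.1f   (biased low)\n",  chopped);
    printf("round half up  %6.1f   (biased high)\n", half_up);
    printf("round to even  %6.1f   (bias cancels)\n", half_even);
    return 0;
}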
None of this is criticism, but I think it is essential to have a
clear idea of the possible failure mechanisms of our algorithms in
order to use them properly. Early nuclear pulse-height analyzers
gave peculiar results until the effects of differential
non-linearity were recognized. After that, the causes of such
non-linearity were often weird and seemingly totally disconnected.
--
fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". - Hutchison
"P.J. Plauger" wrote:[color=blue]
> "CBFalconer " <cbfalconer@yah oo.com> wrote in message
>[/color]
.... snip ...[color=blue]
>[color=green]
>> In a way it is unfortunate that hardware has
>> replaced software floating point, because it makes it
>> impracticable to fix problems.[/color]
>
> I don't understand that comment.[/color]
I mean that the library/routine creator is fairly well restricted
to the FP format supplied by the hardware, on penalty of
horrendous time penalties.
[color=blue]
>[color=green]
>> Not all applications require the same FP library, and the usual
>> library/linker etc construction of C applications makes such
>> customization fairly easy.[/color]
>
> Indeed. That's how we can sell replacement and add-on libraries.
>[/color]
.... snip ...[color=blue]
>[color=green]
>> A precision measure (count of significant bits) would be handy.[/color]
>
> That's the ulp measure described below.[/color]
I meant something that was automatically associated with each
value, and revised by the process of calculation. For example,
multiplying something with 3 digit precision by something with two
digit precision can have no better that 2 digit precision,
regardless of actual significant bits generated. Subtraction of
similar sized numbers can produce violent reductions. This is the
sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst case analysis
up front.
The replacement library can hardly replace the underlying FP
operations. barring the sort of kluges designed to use or bypass
emulators.
[color=blue]
>[color=green][color=darkred]
>>> -- We provide full accuracy only for a handful of the commonest
>>> math functions. But we generally aim for worst-case errors of
>>> 3 ulp (units in the least-significant place) for float, 4 for
>>> double, and 5 for 113-bit long double. The rare occasions
>>> where we fail to achieve this goal are around the zeros of
>>> messy functions.[/color]
>>
>> Those objectives are generally broad enough to eliminate any worry
>> over whether the next missing bit is 0 or 1.[/color]
>
> No, they're orthogonal to that issue, as I keep trying to explain
> to you. We do the best we can with the function arguments given.
> It is the inescapable responsibility of the programmer to
> understand the effect of uncertainties in those argument values on
> the uncertainties in the function results.[/color]
Where you assume that those arguments are exact. In reality, if
they come from almost any physical entity, they express some form
of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2 bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads to the same thing.
Without suitable actions and corrections results can have
unexpected biases. The horrible example is failure to round (as
did some of the early Microsoft Basic fp routines, and my first
efforts :-). A more subtle example is rounding up (or down) from
0.5.
None of this is criticism, but I think it is essential to have a
clear idea of the possible failure mechanisms of our algorithms in
order to use them properly. Early nuclear pulse height analyzers
gave peculiar results until the effects of differential
non-linearity were recognized. After that the causes of such
non-linearity were often wierd and seemingly totally disconnected.
--
fix (vb.): 1. to paper over, obscure, hide from public view; 2.
to work around, in a way that produces unintended consequences
that are worse than the original problem. Usage: "Windows ME
fixes many of the shortcomings of Windows 98 SE". - Hutchison
Comment