
At some point, I'll learn CORDIC. But part of me doesn't want to because I don't want the magic ruined. Kinda like Rubik's cubes.

It feels like a cheat code to calculate complex functions. Like, it shouldn't work at all.

According to @kenshirriff, the original 8087 uses CORDIC. With all the rounding bullshit that FP arith has to do, it surprises me that it's a good fit.

But I also don't understand it well.
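For anyone who wants to spoil the magic a little: the core loop is small enough to sketch. This is a toy floating-point version (a real implementation works in fixed point so the multiplies by 2^-i become shifts; the iteration count and gain handling here are just illustrative):

```python
import math

def cordic_sincos(theta, iters=32):
    # Rotation angles atan(2^-i): each micro-rotation is a shift-and-add
    # in fixed-point hardware; floats here just to show the idea.
    angles = [math.atan(2.0 ** -i) for i in range(iters)]
    # The un-normalized micro-rotations grow the vector by a known gain K;
    # starting from (1/K, 0) cancels that gain up front.
    k = 1.0
    for i in range(iters):
        k *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0 / k, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0  # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x  # (sin(theta), cos(theta))
```

It only converges for |theta| up to the sum of the table angles (about 1.74 rad), so a real implementation reduces the argument into that range first.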


@cr1901 @kenshirriff CORDIC has no role in arithmetic, only trig functions. The x87 trig insns have always been useless for implementing decent trig functions. Slower & less accurate than the obvious implementations.

@cr1901 @dalias @kenshirriff @steve
Taylor and Maclaurin series are approximations at a point, with exponential error outside of that point, whereas minimax, Chebyshev, etc. are best approximations over a range.
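A quick sketch of that error growth, using a degree-7 Maclaurin polynomial for sin (the function name is mine, just for illustration):

```python
import math

def taylor_sin(x, terms=4):
    # Degree-7 Maclaurin polynomial: x - x^3/3! + x^5/5! - x^7/7!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# Near the expansion point (0) the error is tiny; far from it, it blows up.
for x in (0.1, 1.0, 2.0, 3.0):
    print(x, abs(taylor_sin(x) - math.sin(x)))
```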

@fatlimey @dalias @kenshirriff @steve Okay, I didn't know Taylor gets exponentially worse the further you get from the point of interest.

Does that mean for hand-calc tables in the Bad Old Times, you'd need to recalculate the Taylor series at each new "x" of interest (since that's the point where a finite truncation of a Taylor series will be most correct/guaranteed to exist even if ROC is 0)?

I don't know about hand calculations, but it was Chebyshev who set out the idea of a best polynomial approximation: an order-N curve whose error crosses zero N+1 times and reaches the maximum error N+2 times over the range of interest. Remez took that model and solved it numerically with the Remez exchange algorithm.

valelab4.ucsf.edu/svn/3rdparty
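One exchange step of the kind described above can be sketched as a linear solve. This is a single Remez step with hypothetical starting reference points (not the full iteration), and it assumes numpy:

```python
import numpy as np

def remez_step(f, xs, deg):
    # Solve for polynomial coefficients c_0..c_deg plus a levelled error E
    # such that p(x_i) + (-1)^i * E = f(x_i) at the deg+2 reference points.
    n = len(xs)
    A = np.zeros((n, deg + 2))
    for i, x in enumerate(xs):
        A[i, :deg + 1] = [x ** j for j in range(deg + 1)]
        A[i, deg + 1] = (-1) ** i
    sol = np.linalg.solve(A, np.array([f(x) for x in xs]))
    return sol[:deg + 1], sol[deg + 1]  # coeffs (low order first), E

# One step for exp on [-1, 1], degree 2, with 4 guessed reference points:
xs = [-1.0, -0.5, 0.5, 1.0]
coeffs, E = remez_step(np.exp, xs, 2)
# A full Remez iteration would now move the references to the extrema of
# the error f - p and re-solve until E stops changing.
```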

@cr1901 @fatlimey @dalias @kenshirriff Technically it only gets polynomially worse--for sufficiently smooth functions, in a neighborhood of the point of interest, the error of Taylor is bounded by a constant times the first term omitted from the series. But this detail doesn't change that the error is very unevenly distributed, and using an approximation that distributes the error more evenly is essentially always what you want.

@steve @fatlimey @dalias @kenshirriff When I learned Taylor Series for the first time, I felt like a God because it gave me a handle on those pesky sin/cos/exp functions.

But I haven't used them much since. I guess I missed the memo on their weaknesses as approximations outside of cool demos like "here's exp as a polynomial".

@cr1901 @fatlimey @dalias @kenshirriff Taylor series are incredibly powerful! They are (one of) the best approximations at a point. It's just that when you're doing computation, you usually want a best approximation over an interval, not at a point.

@steve @cr1901 @dalias @kenshirriff Physics uses Taylor series to get mathematical insights on complex behaviors by breaking things down into "important" and "unimportant" parts. Computer arithmetic and computation started with physicists, but computer science has different concerns and drives and the subjects have diverged over the years.

Some physicists think their "I do it all from first principles" methods are great, whereas for computation they're often just the most basic first pass.

@dalias @cr1901 @kenshirriff what they said. A few steps of CORDIC are still useful for atan2 argument reduction, more if the only fast operation you have is addition. Otherwise use polynomial approximations everywhere.
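For reference, a sketch of the vectoring-mode loop that gives you atan2 directly (toy float version; the quadrant handling needed for x <= 0 is omitted):

```python
import math

def cordic_atan2(y, x, iters=24):
    # Vectoring mode: drive y toward 0 with shift-and-add micro-rotations,
    # accumulating the total rotation angle in z. Assumes x > 0.
    z = 0.0
    for i in range(iters):
        d = 1.0 if y < 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return z
```

Running only the first few iterations, as suggested above, gets you a coarse angle you can then refine with a polynomial.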

@steve @dalias @kenshirriff Which polynomial approximation should you use? Other than "not Taylor"?

@cr1901 @dalias @kenshirriff generally speaking, minimax, unless you have non-standard requirements for the approximation.

@steve @dalias @kenshirriff Skill issue on my part, but... this seems not-so-fun to calculate by hand:

en.wikipedia.org/wiki/Chebyshe

At least for a given "n", all but one of the terms of the sum on the RHS are "0"...

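For evaluating the polynomials themselves, the three-term recurrence avoids that sum entirely. A sketch:

```python
def chebyshev_T(n, x):
    # T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t
```

And T_n(cos t) = cos(n*t), which is where the equioscillation on [-1, 1] comes from.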

@cr1901 @dalias @kenshirriff they actually are pretty easy to calculate (you "just" construct the appropriate linear system and solve it), but also there are lots and lots of pre-calculated ones readily available, e.g. publik-void.github.io/sin-cos-


@pervognsen @pkhuong @steve @cr1901 @dalias @kenshirriff Sollya can do tricks that Mathematica and Maple cannot - like optimizing specific coefficients to machine representation and fitting the rest while skipping powers you are uninterested in.

@fatlimey @pervognsen @pkhuong @steve @cr1901 @dalias @kenshirriff FWIW: I haven't read the paper yet nor tried the example code, but there's been a bit of progress WRT rational approximants.

gitlab.inria.fr/sfilip/rminima

The paper is from one of the recent ARITH conferences (if it's not mentioned with the code).


@steve @cr1901 @kenshirriff What is atan argument reduction? atan isn't periodic.

@dalias @cr1901 @kenshirriff that doesn’t mean you can’t reduce the argument! log and exp aren’t periodic either (at least, not on the real line), and we do argument reduction for them as well.
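For exp the classic reduction is x = k*ln(2) + r with |r| <= ln(2)/2, so exp(x) = 2^k * exp(r). A sketch (the degree-6 polynomial here is a plain Taylor stand-in for a proper minimax fit):

```python
import math

def exp_reduced(x):
    ln2 = math.log(2.0)
    # Reduce: pick integer k so the remainder r is small.
    k = round(x / ln2)
    r = x - k * ln2
    # On |r| <= ln(2)/2 a short polynomial suffices.
    er = 1 + r + r**2/2 + r**3/6 + r**4/24 + r**5/120 + r**6/720
    return math.ldexp(er, k)  # multiply by 2^k exactly
```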

@dalias @cr1901 @kenshirriff expanding a bit more, I would say argument reduction is any process you use to go from evaluating a function on a larger interval to evaluating a related function on a smaller (in the sense of “computationally easier”) interval.
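For atan, identities do the same job: atan(x) = pi/2 - atan(1/x) maps (1, inf) onto (0, 1), and atan(x) = pi/4 + atan((x-1)/(x+1)) shrinks the interval further. A sketch (again with Taylor standing in for a real minimax polynomial):

```python
import math

TAN_PI_8 = math.tan(math.pi / 8)  # ~0.414

def my_atan(x):
    if x < 0:
        return -my_atan(-x)                  # odd symmetry
    if x > 1:
        return math.pi / 2 - my_atan(1 / x)  # (1, inf) -> (0, 1)
    if x > TAN_PI_8:
        # atan(x) = pi/4 + atan((x - 1)/(x + 1)) shrinks the interval
        return math.pi / 4 + my_atan((x - 1) / (x + 1))
    # |x| <= tan(pi/8): degree-9 Taylor is accurate enough here
    return x - x**3/3 + x**5/5 - x**7/7 + x**9/9
```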

@steve @cr1901 @kenshirriff Yes, I didn't immediately realize how that would work for atan but surely there are trig identities or whatnot that should do it, and I guess you apply CORDIC to the inverse functions involved in that.