I received a message on an excellent discussion group that I follow (retrocomputingforum.com) regarding my last post on the divergence of the Sin(x) results calculated by my PL/I sin(x) program from the ‘actual’ values.
The gist of the post was that the Taylor series *should* converge to the correct answer for any value of x, provided enough terms are included (the series is sin(x) = x - x^3/3! + x^5/5! - ..., which converges for every x, but more and more slowly as x gets larger). The poster (EdS) went on to describe some tests he had run showing the practical limits on x for a given number of terms. Sure enough, 5 terms started to diverge above Pi, while 9 terms was good to over 2Pi.
In response, I rewrote my Sin(x) program to ask for user input: first the upper limit of the range (calculating from 0 to nPi, where n is entered by the user), then the number of Taylor series terms (from 5 to 13). Using the new program, I calculated Sin(x) from 0 to 2Pi with 9 terms, and the results stayed accurate until close to 2Pi, instead of diverging almost immediately above Pi as before.
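For readers who want to experiment, here is a minimal sketch of the kind of term loop involved, written for single-precision float binary(24) as in the 1.0/1.3 compilers. The procedure and variable names are my own for illustration, not the actual listing from my program:

```
sin_taylor: procedure (x, nterms) returns (float binary(24));
   declare x       float binary(24),
           nterms  fixed binary(15);
   declare (term, sum) float binary(24),
           k       fixed binary(15);

   /* the first term of the series is x itself */
   term = x;
   sum  = x;

   /* each subsequent term multiplies the previous one by       */
   /* -x*x / ((2k)*(2k+1)), giving x - x**3/3! + x**5/5! - ...  */
   /* without ever computing a large factorial directly         */
   do k = 1 to nterms - 1;
      term = -term * x * x / ((2*k) * (2*k + 1));
      sum  = sum + term;
   end;

   return (sum);
end sin_taylor;
```

Building each term from the previous one keeps the intermediate values smaller than computing x**n and n! separately, but for large x the early terms still grow enormous before they cancel, which is where single precision runs out of headroom.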
I tried 11 terms out to 4Pi, but the program terminated with an OVERFLOW error. I then tried 4Pi with 9 terms, but received a CONVERSION error. Both indicate the program is at the limit of the PL/I compiler's floating-point precision, which is 24 bits for versions 1.0 and 1.3 (float binary(24)). The docs say version 1.4 allows double precision (float binary(53)), so I found a copy and have now installed it.
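If version 1.4 behaves the way the docs describe, the conversion should mostly be a matter of widening the declarations; something along these lines (untested on my side, so treat it as an assumption for now):

```
/* single precision, as in versions 1.0 / 1.3 */
declare (x, term, sum) float binary(24);

/* double precision, as the version 1.4 docs describe */
declare (x, term, sum) float binary(53);
```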
Before I create a double-precision version of the program, I’ve turned on all debugging just to see where in the Taylor series calculations the program is failing.
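Another way to see exactly which term blows up, alongside the compiler's own debug output, would be to trap the conditions with ON-units and print the loop index when they fire. A sketch of the idea, using the variable names from the earlier sketch (and assuming the 1.x compilers honor these particular ON conditions), placed inside the procedure so x and k are in scope:

```
on overflow
   begin;
      put skip list ('OVERFLOW while computing term', k, 'for x =', x);
   end;

on conversion
   begin;
      put skip list ('CONVERSION error near term', k, 'for x =', x);
   end;
```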
More to come…