Without working through the details, I might say that it's not necessarily that there's anything wrong with the arithmetic, but that non-linear functions will require greater intermediate precision to get what you seem to be after.
For example, suppose we have only 4 digits of floating-point precision, and we square 2500 and take the natural log. To 6 digits the answer is 15.6481, but we only have 4, so we round (correctly) to 15.65. The antilog of that number is 6,261,936, but we only have 4 digits, so we again round, correctly, to 6,262,000. We've done everything correctly, and yet half of our 4 digits are already wrong (2500 squared is 6,250,000). Taking the square root and correctly rounding, we get 2502. The rounding in that last step actually improved the accuracy, but the final digit is still two counts off. Do this several times and the errors compound.
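This round trip is easy to reproduce in Python by forcing every intermediate result to 4 significant figures. The `round_sig` helper below is written just for this demonstration (it is not a library function):

```python
import math

def round_sig(x, digits=4):
    """Round x to `digits` significant figures, mimicking a 4-digit machine."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - math.floor(math.log10(abs(x))))

x = round_sig(math.log(2500.0 ** 2))  # 15.65     (true value 15.6481...)
y = round_sig(math.exp(x))            # 6262000.0 (should be 6250000)
z = round_sig(math.sqrt(y))           # 2502.0    (should be 2500)
```

Every individual rounding is correct; the damage comes purely from the non-linear steps amplifying the rounded-off part.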
Or take the cosine and arccosine functions. The cosine of 0.5625º still rounds to 1.000 if you only have 4 digits. When you take the arccos of that, you correctly get 0.000º, so now you have 0.000º where you started with 0.5625º. If instead you take the cosine of 0.5735º, you get 0.9999, but taking the arccosine of 0.9999 gives 0.8103º, so even the most significant digit is way off.
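The same 4-digit simulation shows both cosine round trips (again using a hand-rolled `round_sig` helper, not a standard call):

```python
import math

def round_sig(x, digits=4):
    """Round x to `digits` significant figures."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - math.floor(math.log10(abs(x))))

# cos(0.5625°) rounds up to 1.000 at 4 digits, so the angle vanishes entirely
c1 = round_sig(math.cos(math.radians(0.5625)))  # 1.0
a1 = math.degrees(math.acos(c1))                # 0.0

# cos(0.5735°) rounds down to 0.9999; arccos then lands at 0.8103°
c2 = round_sig(math.cos(math.radians(0.5735)))  # 0.9999
a2 = round_sig(math.degrees(math.acos(c2)))     # 0.8103
```

The two starting angles differ by about 2%, yet one comes back as 0 and the other as 0.8103°, because near 1 the cosine is so flat that 4 digits cannot distinguish nearby angles.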
So does that mean we need 20 or 30 digits, then?
I can't speak for others' fields of work, but in my own, a loss of precision generally means that somewhere along the way it didn't matter that much. If you're taking the tangent of an angle near ±90º, what appears to be a huge loss of accuracy in the tangent may translate to an insignificant error in the angle. Conversely, keeping even 2-digit accuracy in the tangent of a real-life product may require impossibly tight control of the angle, in which case the design was probably approached wrong from the start.
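As a quick sketch of the tangent point (the 1% figure is just an illustrative choice, not from any particular product): near 90º, even a sizable relative error in the tangent corresponds to a tiny error in the recovered angle.

```python
import math

t_true = math.tan(math.radians(89.9))  # about 573
t_bad = t_true * 1.01                  # pretend the tangent is 1% off

# Recover both angles and compare
err_deg = math.degrees(math.atan(t_bad)) - math.degrees(math.atan(t_true))
# err_deg is only about 0.001°: a 1% error in the tangent is a
# negligible error in the angle this close to 90°
```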
For a given number of digits of precision, it is impossible for certain functions to deliver accuracy to that same number of digits. That doesn't mean anything is wrong. The 10- to 12-digit precision of most calculators is far beyond what most of us need for real-life applications.