Herman
Well-known member
I have a small problem I can't quite figure out; I'm sure it's quite easy, though. I need to find the lowest useful precision of a decimal number. For example, say I have a Decimal variable that contains 0.003; its lowest useful precision is 0.001. If the value is 2, the lowest useful precision is 1; for 234.2456 it is 0.0001, and so on. Is there a straightforward way to get this result? I tried various calculations, but I cannot seem to get it right.
Thanks.
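A sketch of one way to compute this, written in Python's `decimal` module since the exact Decimal type in the question isn't specified (the clamping of whole numbers to 1 is an assumption based on the "2 → 1" example):

```python
from decimal import Decimal

def lowest_useful_precision(value: Decimal) -> Decimal:
    # normalize() strips trailing zeros, so 0.0030 is treated like 0.003;
    # the exponent then tells us the last significant decimal place.
    exponent = value.normalize().as_tuple().exponent
    # Clamp at 0 so whole numbers (e.g. 200, whose normalized exponent
    # is +2) still report 1, matching the "2 -> 1" example.
    return Decimal(1).scaleb(min(exponent, 0))

print(lowest_useful_precision(Decimal("0.003")))     # 0.001
print(lowest_useful_precision(Decimal("2")))         # 1
print(lowest_useful_precision(Decimal("234.2456")))  # 0.0001
```

The same idea works in most languages with a decimal type: count the digits after the decimal point once trailing zeros are removed, then raise 10 to the negative of that count.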