Hi there,
I have several hundred statistical data sets that are displayed on the fly. The values can vary between 0.001 and 10000000 across the different sets.
Now, when displaying statistics like min and max, it is OK to strip the decimal places for most of the values, but not for all of them:
293873982.35 becomes 293873982
0.04 becomes 0
So I implemented a series of IFs to specify that if the value is between 1 and 10, use 1 decimal place; if it is less than 1, use 2; and so on.
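For concreteness, the IF cascade currently looks roughly like this (a Python sketch, since I haven't said which language I'm using; the function names are just placeholders):

```python
import math


def decimals_for(value):
    """Choose the number of decimal places by magnitude:
    >= 10 -> 0 decimals, [1, 10) -> 1 decimal, < 1 -> 2 decimals."""
    v = abs(value)
    if v >= 10:
        return 0
    if v >= 1:
        return 1
    return 2


def fmt(value):
    # Format with the magnitude-dependent number of decimals.
    return f"{value:.{decimals_for(value)}f}"


print(fmt(293873982.35))  # "293873982"
print(fmt(0.04))          # "0.04"
```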
But I wonder if there is a more elegant way to do this: a small mathematical algorithm that does it "better".
Thanks for any ideas!