I'm not sure converting a decimal into a rational number can be done exactly. Computers store only finitely many digits of a number, so we can't necessarily tell whether its expansion repeats. And the computer stores numbers in binary, where many finite decimals become infinite repeating fractions:
decimal 0.43 = binary 0.01101110000101...
In fact, for a terminating decimal to be a finite binary fraction, the fraction in lowest terms must have a power of two as its denominator; equivalently, the nonzero decimal must end in a 5 (as in 0.5, 0.25, 0.375). Even then it may not be stored exactly if it has too many digits, e.g. 0.1987651321949865132168795.
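To see this concretely, here's a small Python sketch (Python's `Fraction` constructor recovers the exact binary value a float actually stores, so it exposes the rounding):

```python
from fractions import Fraction

# 0.43 cannot terminate in binary, so the stored double is only
# the nearest binary fraction, not 43/100 itself.
print(Fraction(0.43))                        # a huge numerator over a power of 2
print(Fraction(0.43) == Fraction(43, 100))   # False

# 0.375 ends in 5 and has few enough digits: 3/8 is exact in binary.
print(Fraction(0.375) == Fraction(3, 8))     # True
```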
Otherwise, suppose the number input is binary 0.11010110010101.
Then convert this to 11010110010101/100000000000000 (both written in binary; the denominator is 2^14, since there are 14 bits after the point). Then calculate gcd(11010110010101, 100000000000000), divide both numerator and denominator by the result, and that's the best you can do. (Here the numerator is odd, so the gcd is 1 and the fraction is already in lowest terms.)
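The procedure above can be sketched in a few lines of Python. This is a sketch under one assumption: `bits`, the number of binary digits kept after the point, is something we choose (a double carries 52 fraction bits, so that is the default here):

```python
from math import gcd

def float_to_fraction(x: float, bits: int = 52):
    """Scale x by 2^bits so the kept binary digits become an integer
    numerator over a power-of-two denominator, then reduce by the gcd."""
    num = round(x * (1 << bits))   # numerator: the binary digits as an integer
    den = 1 << bits                # denominator: 2^bits
    g = gcd(num, den)
    return num // g, den // g

# binary 0.11010110010101 = 13717/2^14; the numerator is odd, so gcd = 1
print(float_to_fraction(0b11010110010101 / (1 << 14), bits=14))  # (13717, 16384)

# 0.75 = 3/4: here the gcd actually reduces the fraction
print(float_to_fraction(0.75))  # (3, 4)
```

Python's built-in `float.as_integer_ratio()` does essentially the same thing, but using the float's full 52-bit mantissa rather than a chosen precision.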